From: Sean Christopherson <seanjc@google.com>
To: Mingwei Zhang <mizhang@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Xiong Zhang <xiong.y.zhang@intel.com>,
Dapeng Mi <dapeng1.mi@linux.intel.com>,
Kan Liang <kan.liang@intel.com>,
Zhenyu Wang <zhenyuw@linux.intel.com>,
Manali Shukla <manali.shukla@amd.com>,
Sandipan Das <sandipan.das@amd.com>,
Jim Mattson <jmattson@google.com>,
Stephane Eranian <eranian@google.com>,
Ian Rogers <irogers@google.com>,
Namhyung Kim <namhyung@kernel.org>,
gce-passthrou-pmu-dev@google.com,
Samantha Alt <samantha.alt@intel.com>,
Zhiyuan Lv <zhiyuan.lv@intel.com>,
Yanfei Xu <yanfei.xu@intel.com>,
Like Xu <like.xu.linux@gmail.com>,
Peter Zijlstra <peterz@infradead.org>,
Raghavendra Rao Ananta <rananta@google.com>,
kvm@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: Re: [RFC PATCH v3 18/58] KVM: x86/pmu: Introduce enable_passthrough_pmu module parameter
Date: Tue, 19 Nov 2024 06:30:23 -0800
Message-ID: <Zzyg_0ACKwHGLC7w@google.com>
In-Reply-To: <20240801045907.4010984-19-mizhang@google.com>
As per my feedback in the initial RFC[*]:
 2. The module param absolutely must not be exposed to userspace until all patches
    are in place.  The easiest way to do that without creating dependency hell is
    to simply not create the module param.
[*] https://lore.kernel.org/all/ZhhQBHQ6V7Zcb8Ve@google.com
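A minimal sketch of that approach (illustrative only, not taken from the series):
keep the flag internal to KVM while the series lands, and only register the module
param in the final patch.

	/* Internal-only for the duration of the series: */
	bool __read_mostly enable_passthrough_pmu;
	EXPORT_SYMBOL_GPL(enable_passthrough_pmu);

	/* Added only by the final patch, once all pieces are in place: */
	module_param(enable_passthrough_pmu, bool, 0444);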
On Thu, Aug 01, 2024, Mingwei Zhang wrote:
> Introduce enable_passthrough_pmu as a RO KVM kernel module parameter. This
> variable is true only when all of the following conditions are satisfied:
> - it is set to true when the module is loaded.
> - enable_pmu is true.
> - KVM is running on an Intel CPU.
> - the CPU supports PerfMon v4.
> - the host PMU supports passthrough mode.
>
> The value is always read-only because passthrough PMU currently does not
> support features like LBR and PEBS, while the emulated PMU does. This will end
> up with two different values for kvm_cap.supported_perf_cap, which is
> initialized at module load time. Maintaining two different perf
> capabilities will add complexity. Further, there is not enough motivation
> to support running two types of PMU implementations at the same time,
> although doing so is technically feasible.
>
> Finally, always propagate enable_passthrough_pmu and perf_capabilities into
> kvm->arch for each KVM instance.
>
> Co-developed-by: Xiong Zhang <xiong.y.zhang@linux.intel.com>
> Signed-off-by: Xiong Zhang <xiong.y.zhang@linux.intel.com>
> Signed-off-by: Mingwei Zhang <mizhang@google.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Tested-by: Yongwei Ma <yongwei.ma@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/pmu.h              | 14 ++++++++++++++
>  arch/x86/kvm/vmx/vmx.c          |  7 +++++--
>  arch/x86/kvm/x86.c              |  8 ++++++++
>  arch/x86/kvm/x86.h              |  1 +
>  5 files changed, 29 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index f8ca74e7678f..a15c783f20b9 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1406,6 +1406,7 @@ struct kvm_arch {
>
>  	bool bus_lock_detection_enabled;
>  	bool enable_pmu;
> +	bool enable_passthrough_pmu;
Again, as I suggested/requested in the initial RFC[*], drop the per-VM flag as well
as kvm_pmu.passthrough. There is zero reason to cache the module param. KVM
should always query kvm->arch.enable_pmu prior to checking if the mediated PMU
is enabled, so I doubt we even need a helper to check both.
[*] https://lore.kernel.org/all/ZhhOEDAl6k-NzOkM@google.com
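E.g., a rough sketch of the resulting check order with no per-VM snapshot
(illustrative only, not code from the series):

	/* e.g. somewhere along a kvm_pmu_refresh()-like path: */
	if (!vcpu->kvm->arch.enable_pmu)
		return;

	if (enable_passthrough_pmu) {
		/* mediated/passthrough-specific setup */
	}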
>
>  	u32 notify_window;
>  	u32 notify_vmexit_flags;
> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
> index 4d52b0b539ba..cf93be5e7359 100644
> --- a/arch/x86/kvm/pmu.h
> +++ b/arch/x86/kvm/pmu.h
> @@ -208,6 +208,20 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
>  			enable_pmu = false;
>  	}
>
> +	/* Pass-through vPMU is only supported in Intel CPUs. */
> +	if (!is_intel)
> +		enable_passthrough_pmu = false;
> +
> +	/*
> +	 * Pass-through vPMU requires at least PerfMon version 4 because the
> +	 * implementation requires the usage of MSR_CORE_PERF_GLOBAL_STATUS_SET
> +	 * for counter emulation as well as PMU context switch. In addition, it
> +	 * requires host PMU support on passthrough mode. Disable pass-through
> +	 * vPMU if any condition fails.
> +	 */
> +	if (!enable_pmu || kvm_pmu_cap.version < 4 || !kvm_pmu_cap.passthrough)
As is quite obvious by the end of the series, the v4 requirement is specific to
Intel.
	if (!enable_pmu || !kvm_pmu_cap.passthrough ||
	    (is_intel && kvm_pmu_cap.version < 4) ||
	    (is_amd && kvm_pmu_cap.version < 2))
		enable_passthrough_pmu = false;
Furthermore, there is zero reason to explicitly and manually check the vendor;
kvm_init_pmu_capability() takes kvm_pmu_ops.  Adding a callback is somewhat
undesirable as it would lead to duplicate code, but we can still provide separation
of concerns by adding const fields to kvm_pmu_ops, a la MAX_NR_GP_COUNTERS.
E.g.
	if (enable_pmu) {
		perf_get_x86_pmu_capability(&kvm_pmu_cap);

		/*
		 * WARN if perf did NOT disable hardware PMU if the number of
		 * architecturally required GP counters aren't present, i.e. if
		 * there are a non-zero number of counters, but fewer than what
		 * is architecturally required.
		 */
		if (!kvm_pmu_cap.num_counters_gp ||
		    WARN_ON_ONCE(kvm_pmu_cap.num_counters_gp < min_nr_gp_ctrs))
			enable_pmu = false;
		else if (pmu_ops->MIN_PMU_VERSION > kvm_pmu_cap.version)
			enable_pmu = false;
	}

	if (!enable_pmu || !kvm_pmu_cap.passthrough ||
	    pmu_ops->MIN_MEDIATED_PMU_VERSION > kvm_pmu_cap.version)
		enable_mediated_pmu = false;
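For completeness, a sketch of how the per-vendor constants could be wired up,
mirroring MAX_NR_GP_COUNTERS (field names and values are assumed from the v4/v2
requirements above, not from the series):

	/* in struct kvm_pmu_ops, next to the existing const fields: */
	const int MIN_PMU_VERSION;
	const int MIN_MEDIATED_PMU_VERSION;

	/* Intel (vmx/pmu_intel.c): */
	.MIN_MEDIATED_PMU_VERSION = 4,

	/* AMD (svm/pmu.c): */
	.MIN_MEDIATED_PMU_VERSION = 2,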
> +		enable_passthrough_pmu = false;
> +
>  	if (!enable_pmu) {
>  		memset(&kvm_pmu_cap, 0, sizeof(kvm_pmu_cap));
>  		return;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ad465881b043..2ad122995f11 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -146,6 +146,8 @@ module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO);
> extern bool __read_mostly allow_smaller_maxphyaddr;
> module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
>
> +module_param(enable_passthrough_pmu, bool, 0444);
Hmm, we either need to put this param in kvm.ko, or move enable_pmu to vendor
modules (or duplicate it there if we need to for backwards compatibility?).
There are advantages to putting params in vendor modules, when it's safe to do so,
e.g. it allows toggling the param when (re)loading a vendor module, so I think I'm
supportive of having the param live in vendor code. I just don't want to split
the two PMU knobs.
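E.g., one possible shape if the knob ends up in vendor code (sketch only; naming
and placement assumed, not from the series):

	/* vmx.c (and likewise svm.c), kept next to the other PMU knob(s): */
	bool __read_mostly enable_mediated_pmu;
	module_param(enable_mediated_pmu, bool, 0444);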
> #define KVM_VM_CR0_ALWAYS_OFF (X86_CR0_NW | X86_CR0_CD)
> #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST X86_CR0_NE
> #define KVM_VM_CR0_ALWAYS_ON \
> @@ -7924,7 +7926,8 @@ static __init u64 vmx_get_perf_capabilities(void)
>  	if (boot_cpu_has(X86_FEATURE_PDCM))
>  		rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
>
> -	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR)) {
> +	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR) &&
> +	    !enable_passthrough_pmu) {
>  		x86_perf_get_lbr(&vmx_lbr_caps);
>
>  		/*
> @@ -7938,7 +7941,7 @@ static __init u64 vmx_get_perf_capabilities(void)
>  			perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
>  	}
>
> -	if (vmx_pebs_supported()) {
> +	if (vmx_pebs_supported() && !enable_passthrough_pmu) {
Checking enable_mediated_pmu belongs in vmx_pebs_supported(), not here; otherwise
KVM will incorrectly advertise support to userspace:
	if (vmx_pebs_supported()) {
		kvm_cpu_cap_check_and_set(X86_FEATURE_DS);
		kvm_cpu_cap_check_and_set(X86_FEATURE_DTES64);
	}
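I.e., something along these lines (the existing body of vmx_pebs_supported() is
recalled from memory and may not match the tree exactly):

	static inline bool vmx_pebs_supported(void)
	{
		return boot_cpu_has(X86_FEATURE_PEBS) && kvm_pmu_cap.pebs_ept &&
		       !enable_mediated_pmu;
	}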
>  		perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK;
>  		/*
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f1d589c07068..0c40f551130e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -187,6 +187,10 @@ bool __read_mostly enable_pmu = true;
> EXPORT_SYMBOL_GPL(enable_pmu);
> module_param(enable_pmu, bool, 0444);
>
> +/* Enable/disable mediated passthrough PMU virtualization */
> +bool __read_mostly enable_passthrough_pmu;
> +EXPORT_SYMBOL_GPL(enable_passthrough_pmu);
> +
> bool __read_mostly eager_page_split = true;
> module_param(eager_page_split, bool, 0644);
>
> @@ -6682,6 +6686,9 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
>  		mutex_lock(&kvm->lock);
>  		if (!kvm->created_vcpus) {
>  			kvm->arch.enable_pmu = !(cap->args[0] & KVM_PMU_CAP_DISABLE);
> +			/* Disable passthrough PMU if enable_pmu is false. */
> +			if (!kvm->arch.enable_pmu)
> +				kvm->arch.enable_passthrough_pmu = false;
And this code obviously goes away if the per-VM snapshot is removed.