From: "Mi, Dapeng" <dapeng1.mi@linux.intel.com>
To: Sean Christopherson <seanjc@google.com>,
Mingwei Zhang <mizhang@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Xiong Zhang <xiong.y.zhang@intel.com>,
Kan Liang <kan.liang@intel.com>,
Zhenyu Wang <zhenyuw@linux.intel.com>,
Manali Shukla <manali.shukla@amd.com>,
Sandipan Das <sandipan.das@amd.com>,
Jim Mattson <jmattson@google.com>,
Stephane Eranian <eranian@google.com>,
Ian Rogers <irogers@google.com>,
Namhyung Kim <namhyung@kernel.org>,
gce-passthrou-pmu-dev@google.com,
Samantha Alt <samantha.alt@intel.com>,
Zhiyuan Lv <zhiyuan.lv@intel.com>,
Yanfei Xu <yanfei.xu@intel.com>,
Like Xu <like.xu.linux@gmail.com>,
Peter Zijlstra <peterz@infradead.org>,
Raghavendra Rao Ananta <rananta@google.com>,
kvm@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: Re: [RFC PATCH v3 44/58] KVM: x86/pmu: Implement emulated counter increment for passthrough PMU
Date: Thu, 21 Nov 2024 10:27:38 +0800
Message-ID: <a6ee6477-0961-40d2-8098-a4b1d0a14140@linux.intel.com>
In-Reply-To: <Zz5DBddNFb-gZra1@google.com>
On 11/21/2024 4:13 AM, Sean Christopherson wrote:
> On Thu, Aug 01, 2024, Mingwei Zhang wrote:
>> Implement emulated counter increment for the passthrough PMU under
>> KVM_REQ_PMU. Defer the counter increment to the KVM_REQ_PMU handler
>> because counter increment requests come from kvm_pmu_trigger_event(),
>> which can be triggered either within the KVM_RUN inner loop or outside
>> of it, i.e. the increment could happen before or after the PMU context
>> switch.
>>
>> Processing counter increments in one place keeps the implementation
>> simple.
>>
>> Signed-off-by: Mingwei Zhang <mizhang@google.com>
>> Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
>> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
>> ---
>> arch/x86/kvm/pmu.c | 41 +++++++++++++++++++++++++++++++++++++++--
>> 1 file changed, 39 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
>> index 5cc539bdcc7e..41057d0122bd 100644
>> --- a/arch/x86/kvm/pmu.c
>> +++ b/arch/x86/kvm/pmu.c
>> @@ -510,6 +510,18 @@ static int reprogram_counter(struct kvm_pmc *pmc)
>>  				     eventsel & ARCH_PERFMON_EVENTSEL_INT);
>>  }
>>  
>> +static void kvm_pmu_handle_event_in_passthrough_pmu(struct kvm_vcpu *vcpu)
>> +{
>> +	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
>> +
>> +	static_call_cond(kvm_x86_pmu_set_overflow)(vcpu);
>> +
>> +	if (atomic64_read(&pmu->__reprogram_pmi)) {
>> +		kvm_make_request(KVM_REQ_PMI, vcpu);
>> +		atomic64_set(&pmu->__reprogram_pmi, 0ull);
>> +	}
>> +}
>> +
>>  void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
>>  {
>>  	DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
>> @@ -517,6 +529,9 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
>>  	struct kvm_pmc *pmc;
>>  	int bit;
>>  
>> +	if (is_passthrough_pmu_enabled(vcpu))
>> +		return kvm_pmu_handle_event_in_passthrough_pmu(vcpu);
>> +
>>  	bitmap_copy(bitmap, pmu->reprogram_pmi, X86_PMC_IDX_MAX);
>>  
>>  	/*
>> @@ -848,6 +863,17 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
>>  	kvm_pmu_reset(vcpu);
>>  }
>>  
>> +static void kvm_passthrough_pmu_incr_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
>> +{
>> +	if (static_call(kvm_x86_pmu_incr_counter)(pmc)) {
> This is absurd. It's the same ugly code in both Intel and AMD.
>
> static bool intel_incr_counter(struct kvm_pmc *pmc)
> {
> 	pmc->counter += 1;
> 	pmc->counter &= pmc_bitmask(pmc);
> 
> 	if (!pmc->counter)
> 		return true;
> 
> 	return false;
> }
> 
> static bool amd_incr_counter(struct kvm_pmc *pmc)
> {
> 	pmc->counter += 1;
> 	pmc->counter &= pmc_bitmask(pmc);
> 
> 	if (!pmc->counter)
> 		return true;
> 
> 	return false;
> }
>
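Yes, the two vendor callbacks are byte-for-byte identical, so the increment
could just live in common x86 code. A minimal sketch (the helper name below
is made up, not from the patch):

static bool kvm_pmu_incr_and_check_overflow(struct kvm_pmc *pmc)
{
	/* Increment and wrap at the counter's architectural width. */
	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);

	/* The counter overflowed iff it wrapped back to zero. */
	return !pmc->counter;
}

That would let us drop the kvm_x86_pmu_incr_counter static call entirely.
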
>> +		__set_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->global_status);
> Using __set_bit() is unnecessary, ugly, and dangerous. KVM uses set_bit(), no
> underscores, for things like reprogram_pmi because the updates need to be atomic.
>
> The downside of __set_bit() and friends is that if pmc->idx is garbage, KVM will
> clobber memory, whereas BIT_ULL(pmc->idx) is "just" undefined behavior. But
> dropping the update is far better than clobbering memory, and can be detected by
> UBSAN (though I doubt anyone is hitting this code with UBSAN).
>
> For this code, a regular ol' bitwise-OR will suffice.
>
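Agreed, a plain bitwise-OR is sufficient here, i.e. (sketch):

	pmc_to_pmu(pmc)->global_status |= BIT_ULL(pmc->idx);
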
>> +		kvm_make_request(KVM_REQ_PMU, vcpu);
>> +
>> +		if (pmc->eventsel & ARCH_PERFMON_EVENTSEL_INT)
>> +			set_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->reprogram_pmi);
> This is badly in need of a comment, and the ordering is unnecessarily weird.
> Set bits in reprogram_pmi *before* making the request. It doesn't matter here
> since this is all on the same vCPU, but it's good practice since KVM_REQ_XXX
> provides the necessary barriers to allow for safe, correct cross-CPU updates.
>
> That said, why on earth is the mediated PMU using KVM_REQ_PMU? Set global_status
> and KVM_REQ_PMI, done.
>
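Right, the canonical ordering is to publish the state first and then raise
the request, which is what kvm_pmu_request_counter_reprogram() already does.
Sketch of the general pattern (moot here, since KVM_REQ_PMU goes away for
the mediated PMU):

	/* Publish the to-be-reprogrammed counter first... */
	set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);
	/* ...then make the request; KVM_REQ_xxx supplies the barriers. */
	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
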
>> +	}
>> +}
>> +
>>  static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
>>  {
>>  	pmc->emulated_counter++;
>> @@ -880,7 +906,8 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
>>  	return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ? select_os : select_user;
>>  }
>>  
>> -void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
>> +static void __kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel,
>> +				    bool is_passthrough)
>>  {
>>  	DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
>>  	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
>> @@ -914,9 +941,19 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
>>  		    !pmc_event_is_allowed(pmc) || !cpl_is_matched(pmc))
>>  			continue;
>>  
>> -		kvm_pmu_incr_counter(pmc);
>> +		if (is_passthrough)
>> +			kvm_passthrough_pmu_incr_counter(vcpu, pmc);
>> +		else
>> +			kvm_pmu_incr_counter(pmc);
>>  	}
>>  }
>> +
>> +void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
>> +{
>> +	bool is_passthrough = is_passthrough_pmu_enabled(vcpu);
>> +
>> +	__kvm_pmu_trigger_event(vcpu, eventsel, is_passthrough);
> Using an inner helper for this is silly, even if the mediated information were
> snapshotted per-vCPU. Just grab the snapshot in a local variable. Using a param
> adds no value and unnecessarily obfuscates the code.
>
> That's all a moot point though, because (a) KVM can check enable_mediated_pmu
> directly, and (b) pivoting on behavior belongs in kvm_pmu_incr_counter(), not here.
>
> And I am leaning towards having the mediated vs. perf-based code live in the same
> function, unless one or both is "huge", so that it's easier to understand and
> appreciate the differences in the implementations.
>
> Not an action item for y'all, but this is also a great time to add comments, which
> are sorely lacking in the code. I am more than happy to do that, as it helps me
> understand (and thus review) the code. I'll throw in suggestions here and there
> as I review.
>
> Anyways, this?
>
> static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
> {
> 	/*
> 	 * For perf-based PMUs, accumulate software-emulated events separately
> 	 * from pmc->counter, as pmc->counter is offset by the count of the
> 	 * associated perf event. Request reprogramming, which will consult
> 	 * both emulated and hardware-generated events to detect overflow.
> 	 */
> 	if (!enable_mediated_pmu) {
> 		pmc->emulated_counter++;
> 		kvm_pmu_request_counter_reprogram(pmc);
> 		return;
> 	}
> 
> 	/*
> 	 * For mediated PMUs, pmc->counter is updated when the vCPU's PMU is
> 	 * put, and will be loaded into hardware when the PMU is loaded. Simply
> 	 * increment the counter and signal overflow if it wraps to zero.
> 	 */
> 	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
> 	if (!pmc->counter) {
> 		pmc_to_pmu(pmc)->global_status |= BIT_ULL(pmc->idx);
> 		kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
> 	}
> }
Yes, thanks.