* [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting
@ 2022-12-09 19:49 Aaron Lewis
2022-12-09 19:49 ` [PATCH 1/2] KVM: x86/pmu: Prevent the PMU from counting disallowed events Aaron Lewis
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Aaron Lewis @ 2022-12-09 19:49 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
This series fixes an issue with the PMU event "Instructions Retired"
(0xc0), then tests the fix to verify it works. Running the updated
test without the fix will result in the following test assert:
"Disallowed PMU event, instructions retired, is counting"
Aaron Lewis (2):
KVM: x86/pmu: Prevent the PMU from counting disallowed events
KVM: selftests: Test the PMU event "Instructions retired"
arch/x86/kvm/pmu.c | 4 +-
.../kvm/x86_64/pmu_event_filter_test.c | 157 ++++++++++++------
2 files changed, 113 insertions(+), 48 deletions(-)
--
2.39.0.rc1.256.g54fd8350bd-goog
^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/2] KVM: x86/pmu: Prevent the PMU from counting disallowed events
  2022-12-09 19:49 [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
@ 2022-12-09 19:49 ` Aaron Lewis
  2023-01-04 17:20   ` Sean Christopherson
  2022-12-09 19:49 ` [PATCH 2/2] KVM: selftests: Test the PMU event "Instructions retired" Aaron Lewis
  2022-12-12 13:24 ` [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting Like Xu
  2 siblings, 1 reply; 7+ messages in thread
From: Aaron Lewis @ 2022-12-09 19:49 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis

When counting "Instructions Retired" (0xc0) in a guest, KVM will
occasionally increment the PMU counter regardless of if that event is
being filtered. This is because some PMU events are incremented via
kvm_pmu_trigger_event(), which doesn't know about the event filter. Add
the event filter to kvm_pmu_trigger_event(), so events that are
disallowed do not increment their counters.

Fixes: 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 arch/x86/kvm/pmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 684393c22105..b87cf35a38b7 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -581,7 +581,9 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
 		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
 
-		if (!pmc || !pmc_is_enabled(pmc) || !pmc_speculative_in_use(pmc))
+		if (!pmc || !pmc_is_enabled(pmc) ||
+		    !pmc_speculative_in_use(pmc) ||
+		    !check_pmu_event_filter(pmc))
 			continue;
 
 		/* Ignore checks for edge detect, pin control, invert and CMASK bits */
-- 
2.39.0.rc1.256.g54fd8350bd-goog

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* Re: [PATCH 1/2] KVM: x86/pmu: Prevent the PMU from counting disallowed events
  2022-12-09 19:49 ` [PATCH 1/2] KVM: x86/pmu: Prevent the PMU from counting disallowed events Aaron Lewis
@ 2023-01-04 17:20   ` Sean Christopherson
  0 siblings, 0 replies; 7+ messages in thread
From: Sean Christopherson @ 2023-01-04 17:20 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson

On Fri, Dec 09, 2022, Aaron Lewis wrote:
> When counting "Instructions Retired" (0xc0) in a guest, KVM will
> occasionally increment the PMU counter regardless of if that event is
> being filtered. This is because some PMU events are incremented via
> kvm_pmu_trigger_event(), which doesn't know about the event filter. Add
> the event filter to kvm_pmu_trigger_event(), so events that are
> disallowed do not increment their counters.
>
> Fixes: 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> ---
>  arch/x86/kvm/pmu.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 684393c22105..b87cf35a38b7 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -581,7 +581,9 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
>  	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
>  		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
>
> -		if (!pmc || !pmc_is_enabled(pmc) || !pmc_speculative_in_use(pmc))
> +		if (!pmc || !pmc_is_enabled(pmc) ||
> +		    !pmc_speculative_in_use(pmc) ||
> +		    !check_pmu_event_filter(pmc))

reprogram_counter() has the same three checks, seems like we should combine
them into a common helper.  No idea what to call it though.  Maybe?

	if (!pmc || !pmc_is_fully_enabled(pmc))

>  			continue;
>
>  		/* Ignore checks for edge detect, pin control, invert and CMASK bits */
> --
> 2.39.0.rc1.256.g54fd8350bd-goog
>

^ permalink raw reply	[flat|nested] 7+ messages in thread
* [PATCH 2/2] KVM: selftests: Test the PMU event "Instructions retired"
  2022-12-09 19:49 [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
  2022-12-09 19:49 ` [PATCH 1/2] KVM: x86/pmu: Prevent the PMU from counting disallowed events Aaron Lewis
@ 2022-12-09 19:49 ` Aaron Lewis
  2023-01-04 17:35   ` Sean Christopherson
  2022-12-12 13:24 ` [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting Like Xu
  2 siblings, 1 reply; 7+ messages in thread
From: Aaron Lewis @ 2022-12-09 19:49 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis

Add testing for the event "Instructions retired" (0xc0) in the PMU
event filter on both Intel and AMD to ensure that the event doesn't
count when it is disallowed.  Unlike most of the other events, the
event "Instructions retired", will be incremented by KVM when an
instruction is emulated.  Test that this case is being properly handled
and that KVM doesn't increment the counter when that event is
disallowed.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 157 ++++++++++++------
 1 file changed, 110 insertions(+), 47 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 2de98fce7edd..81311af9522a 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -54,6 +54,21 @@
 
 #define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)
 
+
+/*
+ * "Retired instructions", from Processor Programming Reference
+ * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
+ * Preliminary Processor Programming Reference (PPR) for AMD Family
+ * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
+ * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
+ * B1 Processors Volume 1 of 2.
+ * --- and ---
+ * "Instructions retired", from the Intel SDM, volume 3,
+ * "Pre-defined Architectural Performance Events."
+ */
+
+#define INST_RETIRED EVENT(0xc0, 0)
+
 /*
  * This event list comprises Intel's eight architectural events plus
  * AMD's "retired branch instructions" for Zen[123] (and possibly
@@ -61,7 +76,7 @@
  */
 static const uint64_t event_list[] = {
 	EVENT(0x3c, 0),
-	EVENT(0xc0, 0),
+	INST_RETIRED,
 	EVENT(0x3c, 1),
 	EVENT(0x2e, 0x4f),
 	EVENT(0x2e, 0x41),
@@ -71,6 +86,16 @@ static const uint64_t event_list[] = {
 	AMD_ZEN_BR_RETIRED,
 };
 
+struct perf_results {
+	union {
+		uint64_t raw;
+		struct {
+			uint64_t br_count:32;
+			uint64_t ir_count:32;
+		};
+	};
+};
+
 /*
  * If we encounter a #GP during the guest PMU sanity check, then the guest
  * PMU is not functional. Inform the hypervisor via GUEST_SYNC(0).
@@ -100,6 +125,24 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
 		GUEST_SYNC(0);
 }
 
+static uint64_t test_guest(uint32_t msr_base)
+{
+	struct perf_results r;
+	uint64_t br0, br1;
+	uint64_t ir0, ir1;
+
+	br0 = rdmsr(msr_base + 0);
+	ir0 = rdmsr(msr_base + 1);
+	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+	br1 = rdmsr(msr_base + 0);
+	ir1 = rdmsr(msr_base + 1);
+
+	r.br_count = br1 - br0;
+	r.ir_count = ir1 - ir0;
+
+	return r.raw;
+}
+
 static void intel_guest_code(void)
 {
 	check_msr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
@@ -108,16 +151,17 @@ static void intel_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t br0, br1;
+		uint64_t counts;
 
 		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 		wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
-		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
-		br0 = rdmsr(MSR_IA32_PMC0);
-		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
-		br1 = rdmsr(MSR_IA32_PMC0);
-		GUEST_SYNC(br1 - br0);
+		wrmsr(MSR_P6_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED);
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x3);
+
+		counts = test_guest(MSR_IA32_PMC0);
+		GUEST_SYNC(counts);
 	}
 }
 
@@ -133,15 +177,16 @@ static void amd_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t br0, br1;
+		uint64_t counts;
 
 		wrmsr(MSR_K7_EVNTSEL0, 0);
 		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
-		br0 = rdmsr(MSR_K7_PERFCTR0);
-		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
-		br1 = rdmsr(MSR_K7_PERFCTR0);
-		GUEST_SYNC(br1 - br0);
+		wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED);
+
+		counts = test_guest(MSR_K7_PERFCTR0);
+		GUEST_SYNC(counts);
 	}
 }
 
@@ -240,14 +285,39 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 	return f;
 }
 
+#define expect_success(r) __expect_success(r, __func__)
+
+static void __expect_success(struct perf_results r, const char *func) {
+	if (r.br_count != NUM_BRANCHES)
+		pr_info("%s: Branch instructions retired = %u (expected %u)\n",
+			func, r.br_count, NUM_BRANCHES);
+
+	TEST_ASSERT(r.br_count,
+		    "Allowed event, branch instructions retired, is not counting.");
+	TEST_ASSERT(r.ir_count,
+		    "Allowed event, instructions retired, is not counting.");
+}
+
+#define expect_failure(r) __expect_failure(r, __func__)
+
+static void __expect_failure(struct perf_results r, const char *func) {
+	if (r.br_count)
+		pr_info("%s: Branch instructions retired = %u (expected 0)\n",
+			func, r.br_count);
+
+	TEST_ASSERT(!r.br_count,
+		    "Disallowed PMU event, branch instructions retired, is counting");
+	TEST_ASSERT(!r.ir_count,
+		    "Disallowed PMU event, instructions retired, is counting");
+}
+
 static void test_without_filter(struct kvm_vcpu *vcpu)
 {
-	uint64_t count = run_vcpu_to_sync(vcpu);
+	struct perf_results r;
 
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+	r.raw = run_vcpu_to_sync(vcpu);
+
+	expect_success(r);
 }
 
 static uint64_t test_with_filter(struct kvm_vcpu *vcpu,
@@ -261,70 +331,63 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 {
 	uint64_t event = EVENT(0x1C2, 0);
 	struct kvm_pmu_event_filter *f;
-	uint64_t count;
+	struct perf_results r;
 
 	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY);
-	count = test_with_filter(vcpu, f);
-
+	r.raw = test_with_filter(vcpu, f);
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	expect_success(r);
 }
 
 static void test_member_deny_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
-	uint64_t count = test_with_filter(vcpu, f);
+	struct perf_results r;
+
+	r.raw = test_with_filter(vcpu, f);
 
 	free(f);
-	if (count)
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
-			__func__, count);
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+
+	expect_failure(r);
 }
 
 static void test_member_allow_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
-	uint64_t count = test_with_filter(vcpu, f);
+	struct perf_results r;
 
+	r.raw = test_with_filter(vcpu, f);
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	expect_success(r);
 }
 
 static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
-	uint64_t count;
+	struct perf_results r;
 
+	remove_event(f, INST_RETIRED);
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
-	count = test_with_filter(vcpu, f);
+	r.raw = test_with_filter(vcpu, f);
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	expect_success(r);
 }
 
 static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
-	uint64_t count;
+	struct perf_results r;
 
+	remove_event(f, INST_RETIRED);
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
-	count = test_with_filter(vcpu, f);
-	free(f);
-	if (count)
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
-			__func__, count);
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+	r.raw = test_with_filter(vcpu, f);
+
+	expect_failure(r);
 }
 
 /*
-- 
2.39.0.rc1.256.g54fd8350bd-goog

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* Re: [PATCH 2/2] KVM: selftests: Test the PMU event "Instructions retired"
  2022-12-09 19:49 ` [PATCH 2/2] KVM: selftests: Test the PMU event "Instructions retired" Aaron Lewis
@ 2023-01-04 17:35   ` Sean Christopherson
  0 siblings, 0 replies; 7+ messages in thread
From: Sean Christopherson @ 2023-01-04 17:35 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson

On Fri, Dec 09, 2022, Aaron Lewis wrote:
> Add testing for the event "Instructions retired" (0xc0) in the PMU
> event filter on both Intel and AMD to ensure that the event doesn't
> count when it is disallowed.  Unlike most of the other events, the
> event "Instructions retired", will be incremented by KVM when an
> instruction is emulated.  Test that this case is being properly handled
> and that KVM doesn't increment the counter when that event is
> disallowed.
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> ---
>  .../kvm/x86_64/pmu_event_filter_test.c        | 157 ++++++++++++------
>  1 file changed, 110 insertions(+), 47 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> index 2de98fce7edd..81311af9522a 100644
> --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> @@ -54,6 +54,21 @@
>
>  #define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)
>
> +
> +/*
> + * "Retired instructions", from Processor Programming Reference
> + * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
> + * Preliminary Processor Programming Reference (PPR) for AMD Family
> + * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
> + * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
> + * B1 Processors Volume 1 of 2.
> + * --- and ---
> + * "Instructions retired", from the Intel SDM, volume 3,
> + * "Pre-defined Architectural Performance Events."
> + */
> +
> +#define INST_RETIRED EVENT(0xc0, 0)
> +
>  /*
>   * This event list comprises Intel's eight architectural events plus
>   * AMD's "retired branch instructions" for Zen[123] (and possibly
> @@ -61,7 +76,7 @@
>   */
>  static const uint64_t event_list[] = {
>  	EVENT(0x3c, 0),
> -	EVENT(0xc0, 0),
> +	INST_RETIRED,

There are multiple refactorings thrown into this single patch.  Please break
them out to their own prep patches, bundling everything together makes it way
too hard to identify the actual functional change.

>  	EVENT(0x3c, 1),
>  	EVENT(0x2e, 0x4f),
>  	EVENT(0x2e, 0x41),

...

> @@ -240,14 +285,39 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
>  	return f;
>  }
>
> +#define expect_success(r) __expect_success(r, __func__)

I'm all for macros, but in this case I think it's better to just have the
callers pass in __func__ themselves.  There's going to be copy+paste anyways,
the few extra characters are a non-issue.

Alternatively, make the inner helpers macros, though that'll be annoying to
read and maintain.

And somewhat of a nit, instead of "success" vs. "failure", what about
"counting" vs. "not_counting"?  And s/expect/assert?  Without looking at the
low level code, it wasn't clear to me what "failure" meant.  E.g.

	assert_pmc_counting(r, __func__);
	assert_pmc_not_counting(r, __func__);

> +
> +static void __expect_success(struct perf_results r, const char *func) {

Curly brace on its own line for functions.

> +	if (r.br_count != NUM_BRANCHES)
> +		pr_info("%s: Branch instructions retired = %u (expected %u)\n",
> +			func, r.br_count, NUM_BRANCHES);
> +
> +	TEST_ASSERT(r.br_count,
> +		    "Allowed event, branch instructions retired, is not counting.");
> +	TEST_ASSERT(r.ir_count,
> +		    "Allowed event, instructions retired, is not counting.");
> +}
> +
> +#define expect_failure(r) __expect_failure(r, __func__)
> +
> +static void __expect_failure(struct perf_results r, const char *func) {
> +	if (r.br_count)
> +		pr_info("%s: Branch instructions retired = %u (expected 0)\n",
> +			func, r.br_count);

This pr_info() seems silly.  If br_count is non-zero, the assert below will
fire, no?

> +
> +	TEST_ASSERT(!r.br_count,
> +		    "Disallowed PMU event, branch instructions retired, is counting");

Either make these inner helpers macros so that the assert is guaranteed unique,
or include the function name in the assert message.  If
__expect_{failure,success}() is NOT inlined, but the caller is, then it will be
mildly annoying to determine exactly what test failed.

> +	TEST_ASSERT(!r.ir_count,
> +		    "Disallowed PMU event, instructions retired, is counting");
> +}
> +

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting
  2022-12-09 19:49 [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
  2022-12-09 19:49 ` [PATCH 1/2] KVM: x86/pmu: Prevent the PMU from counting disallowed events Aaron Lewis
  2022-12-09 19:49 ` [PATCH 2/2] KVM: selftests: Test the PMU event "Instructions retired" Aaron Lewis
@ 2022-12-12 13:24 ` Like Xu
  2022-12-15 13:39   ` Aaron Lewis
  2 siblings, 1 reply; 7+ messages in thread
From: Like Xu @ 2022-12-12 13:24 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: pbonzini, jmattson, kvm list

On 10/12/2022 3:49 am, Aaron Lewis wrote:
> Aaron Lewis (2):
>   KVM: x86/pmu: Prevent the PMU from counting disallowed events

Nice and it blames to me, thanks.

Would you share a detailed list of allowed and denied events (e.g. on ICX)
so we can do more real world testing ?

Ref: #define KVM_PMU_EVENT_FILTER_MAX_EVENTS 300

>   KVM: selftests: Test the PMU event "Instructions retired"

And, do you have further plans to cover more pmu testcases via selftests ?

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting
  2022-12-12 13:24 ` [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting Like Xu
@ 2022-12-15 13:39   ` Aaron Lewis
  0 siblings, 0 replies; 7+ messages in thread
From: Aaron Lewis @ 2022-12-15 13:39 UTC (permalink / raw)
  To: Like Xu; +Cc: pbonzini, jmattson, kvm list

On Mon, Dec 12, 2022 at 5:24 AM Like Xu <like.xu.linux@gmail.com> wrote:
>
> On 10/12/2022 3:49 am, Aaron Lewis wrote:
> > Aaron Lewis (2):
> >   KVM: x86/pmu: Prevent the PMU from counting disallowed events
>
> Nice and it blames to me, thanks.
>
> Would you share a detailed list of allowed and denied events (e.g. on ICX)
> so we can do more real world testing ?
>
> Ref: #define KVM_PMU_EVENT_FILTER_MAX_EVENTS 300

TBH, 300 entries is plenty for Intel.  It's AMD that has the issue, but
that's addressed with 'masked events'.

What type of testing would you like to see for filtering or the PMU in
general as a selftest?  Currently, the test uses architectural events,
but with this series we are really only testing 2 of them.  There is
plenty of room for more / better testing with just architectural events
alone.  I also add more testing and more variety of counters with masked
events.  I'm curious where you would like to see additional testing and
what real world testing you'd like to see added.

> > KVM: selftests: Test the PMU event "Instructions retired"
>
> And, do you have further plans to cover more pmu testcases via selftests ?

I don't have immediate plans beyond the 2 mentioned above, but I'm always
open to more / better testing.

^ permalink raw reply	[flat|nested] 7+ messages in thread
end of thread, other threads:[~2023-01-04 17:35 UTC | newest]

Thread overview: 7+ messages
-- links below jump to the message on this page --
2022-12-09 19:49 [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
2022-12-09 19:49 ` [PATCH 1/2] KVM: x86/pmu: Prevent the PMU from counting disallowed events Aaron Lewis
2023-01-04 17:20   ` Sean Christopherson
2022-12-09 19:49 ` [PATCH 2/2] KVM: selftests: Test the PMU event "Instructions retired" Aaron Lewis
2023-01-04 17:35   ` Sean Christopherson
2022-12-12 13:24 ` [PATCH 0/2] Fix "Instructions Retired" from incorrectly counting Like Xu
2022-12-15 13:39   ` Aaron Lewis