* [PATCH v2 1/8] KVM: selftests: KVM: selftests: Add macros for fixed counters in processor.h
2023-05-30 13:42 [PATCH v2 0/8] KVM: selftests: Test the consistency of the PMU's CPUID and its features Jinrong Liang
@ 2023-05-30 13:42 ` Jinrong Liang
2023-06-28 19:46 ` Sean Christopherson
2023-05-30 13:42 ` [PATCH v2 2/8] KVM: selftests: Add pmu.h for PMU events and common masks Jinrong Liang
` (6 subsequent siblings)
7 siblings, 1 reply; 15+ messages in thread
From: Jinrong Liang @ 2023-05-30 13:42 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
From: Jinrong Liang <cloudliang@tencent.com>
Add macro in processor.h, providing a efficient way to obtain
the number of fixed counters and fixed counters bit mask. The
addition of these macro will simplify the handling of fixed
performance counters, while keeping the code maintainable and
clean.
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
---
tools/testing/selftests/kvm/include/x86_64/processor.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index aa434c8f19c5..94751bddf1d9 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -240,6 +240,8 @@ struct kvm_x86_cpu_property {
#define X86_PROPERTY_PMU_VERSION KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 0, 7)
#define X86_PROPERTY_PMU_NR_GP_COUNTERS KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 8, 15)
#define X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 24, 31)
+#define X86_PROPERTY_PMU_FIXED_CTRS_BITMASK KVM_X86_CPU_PROPERTY(0xa, 0, ECX, 0, 31)
+#define X86_PROPERTY_PMU_NR_FIXED_COUNTERS KVM_X86_CPU_PROPERTY(0xa, 0, EDX, 0, 4)
#define X86_PROPERTY_SUPPORTED_XCR0_LO KVM_X86_CPU_PROPERTY(0xd, 0, EAX, 0, 31)
#define X86_PROPERTY_XSTATE_MAX_SIZE_XCR0 KVM_X86_CPU_PROPERTY(0xd, 0, EBX, 0, 31)
--
2.31.1
^ permalink raw reply related [flat|nested] 15+ messages in thread

* Re: [PATCH v2 1/8] KVM: selftests: KVM: selftests: Add macros for fixed counters in processor.h
2023-05-30 13:42 ` [PATCH v2 1/8] KVM: selftests: KVM: selftests: Add macros for fixed counters in processor.h Jinrong Liang
@ 2023-06-28 19:46 ` Sean Christopherson
0 siblings, 0 replies; 15+ messages in thread
From: Sean Christopherson @ 2023-06-28 19:46 UTC (permalink / raw)
To: Jinrong Liang
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
Heh, duplicate "KVM: selftests:" in the shortlog.
On Tue, May 30, 2023, Jinrong Liang wrote:
> From: Jinrong Liang <cloudliang@tencent.com>
>
> Add macro in processor.h, providing a efficient way to obtain
Try not to describe what the patch literally does in terms of code, the purpose
of the shortlog+changelog is to complement the diff, e.g. it's super obvious from
the diff that this patch adds macros in processor.h.
> the number of fixed counters and fixed counters bit mask. The
Wrap closer to 75 chars, 60 is too aggressive.
> addition of these macro will simplify the handling of fixed
> performance counters, while keeping the code maintainable and
> clean.
Instead of making assertions, justify the patch by stating the effects on code.
Statements like "will simplify the handling" and "keeping the code maintainable
and clean" are subjective. In cases like these, it's extremely unlikely anyone
will disagree, but getting into the habit of providing concrete justification
even for simple cases makes it easier to write changelogs for more complex changes.
E.g.
Add x86 properties for the number of PMU fixed counters and the bitmask
that allows for "discontiguous" fixed counters so that tests don't have
to manually retrieve the correct CPUID leaf+register, and so that the
resulting code is self-documenting.
> Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
> ---
> tools/testing/selftests/kvm/include/x86_64/processor.h | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
> index aa434c8f19c5..94751bddf1d9 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
> @@ -240,6 +240,8 @@ struct kvm_x86_cpu_property {
> #define X86_PROPERTY_PMU_VERSION KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 0, 7)
> #define X86_PROPERTY_PMU_NR_GP_COUNTERS KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 8, 15)
> #define X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 24, 31)
> +#define X86_PROPERTY_PMU_FIXED_CTRS_BITMASK KVM_X86_CPU_PROPERTY(0xa, 0, ECX, 0, 31)
Please spell out COUNTERS so that all the properties are consistent.
> +#define X86_PROPERTY_PMU_NR_FIXED_COUNTERS KVM_X86_CPU_PROPERTY(0xa, 0, EDX, 0, 4)
>
> #define X86_PROPERTY_SUPPORTED_XCR0_LO KVM_X86_CPU_PROPERTY(0xd, 0, EAX, 0, 31)
> #define X86_PROPERTY_XSTATE_MAX_SIZE_XCR0 KVM_X86_CPU_PROPERTY(0xd, 0, EBX, 0, 31)
> --
> 2.31.1
>
* [PATCH v2 2/8] KVM: selftests: Add pmu.h for PMU events and common masks
2023-05-30 13:42 [PATCH v2 0/8] KVM: selftests: Test the consistency of the PMU's CPUID and its features Jinrong Liang
2023-05-30 13:42 ` [PATCH v2 1/8] KVM: selftests: KVM: selftests: Add macros for fixed counters in processor.h Jinrong Liang
@ 2023-05-30 13:42 ` Jinrong Liang
2023-06-28 20:02 ` Sean Christopherson
2023-05-30 13:42 ` [PATCH v2 3/8] KVM: selftests: Test Intel PMU architectural events on gp counters Jinrong Liang
` (5 subsequent siblings)
7 siblings, 1 reply; 15+ messages in thread
From: Jinrong Liang @ 2023-05-30 13:42 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
From: Jinrong Liang <cloudliang@tencent.com>
To introduce a new pmu.h header file under
tools/testing/selftests/kvm/include/x86_64 directory to better
organize the PMU performance event constants and common masks.
It will enhance the maintainability and readability of the KVM
selftests code.
In the new pmu.h header, to define the PMU performance events and
masks that are relevant for x86_64, allowing developers to easily
reference them and minimize potential errors in code that handles
these values.
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
---
.../selftests/kvm/include/x86_64/pmu.h | 56 +++++++++++++++++++
1 file changed, 56 insertions(+)
create mode 100644 tools/testing/selftests/kvm/include/x86_64/pmu.h
diff --git a/tools/testing/selftests/kvm/include/x86_64/pmu.h b/tools/testing/selftests/kvm/include/x86_64/pmu.h
new file mode 100644
index 000000000000..0e0111b11024
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/pmu.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * tools/testing/selftests/kvm/include/x86_64/pmu.h
+ *
+ * Copyright (C) 2023, Tencent, Inc.
+ */
+#ifndef _PMU_H_
+#define _PMU_H_
+
+#include "processor.h"
+
+#define GP_CTR_NUM_OFS_BIT 8
+#define EVT_LEN_OFS_BIT 24
+#define INTEL_PMC_IDX_FIXED 32
+
+#define PMU_CAP_FW_WRITES BIT_ULL(13)
+#define EVENTSEL_OS BIT_ULL(17)
+#define EVENTSEL_ANY BIT_ULL(21)
+#define EVENTSEL_EN BIT_ULL(22)
+#define RDPMC_FIXED_BASE BIT_ULL(30)
+
+#define PMU_VERSION_MASK GENMASK_ULL(7, 0)
+#define EVENTS_MASK GENMASK_ULL(7, 0)
+#define EVT_LEN_MASK GENMASK_ULL(31, EVT_LEN_OFS_BIT)
+#define GP_CTR_NUM_MASK GENMASK_ULL(15, GP_CTR_NUM_OFS_BIT)
+#define FIXED_CTR_NUM_MASK GENMASK_ULL(4, 0)
+
+#define X86_INTEL_PMU_VERSION kvm_cpu_property(X86_PROPERTY_PMU_VERSION)
+#define X86_INTEL_MAX_GP_CTR_NUM kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS)
+#define X86_INTEL_MAX_FIXED_CTR_NUM kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS)
+#define X86_INTEL_FIXED_CTRS_BITMASK kvm_cpu_property(X86_PROPERTY_PMU_FIXED_CTRS_BITMASK)
+
+/* Definitions for Architectural Performance Events */
+#define ARCH_EVENT(select, umask) (((select) & 0xff) | ((umask) & 0xff) << 8)
+
+/* Intel Pre-defined Architectural Performance Events */
+static const uint64_t arch_events[] = {
+ [0] = ARCH_EVENT(0x3c, 0x0),
+ [1] = ARCH_EVENT(0xc0, 0x0),
+ [2] = ARCH_EVENT(0x3c, 0x1),
+ [3] = ARCH_EVENT(0x2e, 0x4f),
+ [4] = ARCH_EVENT(0x2e, 0x41),
+ [5] = ARCH_EVENT(0xc4, 0x0),
+ [6] = ARCH_EVENT(0xc5, 0x0),
+ [7] = ARCH_EVENT(0xa4, 0x1),
+};
+
+/* Association of Fixed Counters with Architectural Performance Events */
+static int fixed_events[] = {1, 0, 7};
+
+static inline uint64_t evt_code_for_fixed_ctr(uint8_t idx)
+{
+ return arch_events[fixed_events[idx]];
+}
+
+#endif /* _PMU_H_ */
--
2.31.1
* Re: [PATCH v2 2/8] KVM: selftests: Add pmu.h for PMU events and common masks
2023-05-30 13:42 ` [PATCH v2 2/8] KVM: selftests: Add pmu.h for PMU events and common masks Jinrong Liang
@ 2023-06-28 20:02 ` Sean Christopherson
0 siblings, 0 replies; 15+ messages in thread
From: Sean Christopherson @ 2023-06-28 20:02 UTC (permalink / raw)
To: Jinrong Liang
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
On Tue, May 30, 2023, Jinrong Liang wrote:
> From: Jinrong Liang <cloudliang@tencent.com>
>
> To introduce a new pmu.h header file under
> tools/testing/selftests/kvm/include/x86_64 directory to better
> organize the PMU performance event constants and common masks.
> It will enhance the maintainability and readability of the KVM
> selftests code.
>
> In the new pmu.h header, to define the PMU performance events and
> masks that are relevant for x86_64, allowing developers to easily
> reference them and minimize potential errors in code that handles
> these values.
Same feedback as the previous changelog.
> Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
> ---
> .../selftests/kvm/include/x86_64/pmu.h | 56 +++++++++++++++++++
> 1 file changed, 56 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/pmu.h
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/pmu.h b/tools/testing/selftests/kvm/include/x86_64/pmu.h
> new file mode 100644
> index 000000000000..0e0111b11024
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/pmu.h
> @@ -0,0 +1,56 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * tools/testing/selftests/kvm/include/x86_64/pmu.h
> + *
> + * Copyright (C) 2023, Tencent, Inc.
> + */
> +#ifndef _PMU_H_
> +#define _PMU_H_
SELFTEST_KVM_PMU_H for consistency, and to minimize the risk of a collision.
> +#include "processor.h"
> +
> +#define GP_CTR_NUM_OFS_BIT 8
> +#define EVT_LEN_OFS_BIT 24
Please spell out the words, I genuinely have no idea what these refer to, and
readers shouldn't have to consult the SDM just to understand a name.
> +#define INTEL_PMC_IDX_FIXED 32
> +
> +#define PMU_CAP_FW_WRITES BIT_ULL(13)
> +#define EVENTSEL_OS BIT_ULL(17)
> +#define EVENTSEL_ANY BIT_ULL(21)
> +#define EVENTSEL_EN BIT_ULL(22)
> +#define RDPMC_FIXED_BASE BIT_ULL(30)
> +
> +#define PMU_VERSION_MASK GENMASK_ULL(7, 0)
> +#define EVENTS_MASK GENMASK_ULL(7, 0)
> +#define EVT_LEN_MASK GENMASK_ULL(31, EVT_LEN_OFS_BIT)
> +#define GP_CTR_NUM_MASK GENMASK_ULL(15, GP_CTR_NUM_OFS_BIT)
> +#define FIXED_CTR_NUM_MASK GENMASK_ULL(4, 0)
> +
> +#define X86_INTEL_PMU_VERSION kvm_cpu_property(X86_PROPERTY_PMU_VERSION)
> +#define X86_INTEL_MAX_GP_CTR_NUM kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS)
> +#define X86_INTEL_MAX_FIXED_CTR_NUM kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS)
> +#define X86_INTEL_FIXED_CTRS_BITMASK kvm_cpu_property(X86_PROPERTY_PMU_FIXED_CTRS_BITMASK)
Please don't add macros like this. It gives the false impression that all these
values are constant at compile time, which is very much not the case. I really,
really dislike code that hides important details, like the fact that this is
querying KVM.
Yeah, the line lengths will be longer, but 80 chars is a soft limit, and we can
always get creative, e.g.
uint8_t max_pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
struct kvm_vm *vm;
struct kvm_vcpu *vcpu;
uint8_t version;
TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS) > 2);
> +/* Definitions for Architectural Performance Events */
> +#define ARCH_EVENT(select, umask) (((select) & 0xff) | ((umask) & 0xff) << 8)
> +
> +/* Intel Pre-defined Architectural Performance Events */
> +static const uint64_t arch_events[] = {
> + [0] = ARCH_EVENT(0x3c, 0x0),
> + [1] = ARCH_EVENT(0xc0, 0x0),
> + [2] = ARCH_EVENT(0x3c, 0x1),
> + [3] = ARCH_EVENT(0x2e, 0x4f),
> + [4] = ARCH_EVENT(0x2e, 0x41),
> + [5] = ARCH_EVENT(0xc4, 0x0),
> + [6] = ARCH_EVENT(0xc5, 0x0),
> + [7] = ARCH_EVENT(0xa4, 0x1),
Please do something like I proposed for KVM, i.e. avoid magic numbers inasmuch
as possible.
https://lore.kernel.org/all/20230607010206.1425277-2-seanjc@google.com
> +};
> +
> +/* Association of Fixed Counters with Architectural Performance Events */
> +static int fixed_events[] = {1, 0, 7};
> +
> +static inline uint64_t evt_code_for_fixed_ctr(uint8_t idx)
s/evt/event. Having consistent naming is more important than saving two characters.
> +{
> + return arch_events[fixed_events[idx]];
> +}
> +
> +#endif /* _PMU_H_ */
> --
> 2.31.1
>
* [PATCH v2 3/8] KVM: selftests: Test Intel PMU architectural events on gp counters
2023-05-30 13:42 [PATCH v2 0/8] KVM: selftests: Test the consistency of the PMU's CPUID and its features Jinrong Liang
2023-05-30 13:42 ` [PATCH v2 1/8] KVM: selftests: KVM: selftests: Add macros for fixed counters in processor.h Jinrong Liang
2023-05-30 13:42 ` [PATCH v2 2/8] KVM: selftests: Add pmu.h for PMU events and common masks Jinrong Liang
@ 2023-05-30 13:42 ` Jinrong Liang
2023-06-28 20:43 ` Sean Christopherson
2023-05-30 13:42 ` [PATCH v2 4/8] KVM: selftests: Test Intel PMU architectural events on fixed counters Jinrong Liang
` (4 subsequent siblings)
7 siblings, 1 reply; 15+ messages in thread
From: Jinrong Liang @ 2023-05-30 13:42 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
From: Like Xu <likexu@tencent.com>
Add test cases to check if different Architectural events are available
after it's marked as unavailable via CPUID. It covers vPMU event filtering
logic based on Intel CPUID, which is a complement to pmu_event_filter.
According to Intel SDM, the number of architectural events is reported
through CPUID.0AH:EAX[31:24] and the architectural event x is
supported if EBX[x]=0 && EAX[31:24]>x.
Co-developed-by: Jinrong Liang <cloudliang@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../kvm/x86_64/pmu_basic_functionality_test.c | 168 ++++++++++++++++++
2 files changed, 169 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 18cadc669798..f636968709c4 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -78,6 +78,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
TEST_GEN_PROGS_x86_64 += x86_64/monitor_mwait_test
TEST_GEN_PROGS_x86_64 += x86_64/nested_exceptions_test
TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
+TEST_GEN_PROGS_x86_64 += x86_64/pmu_basic_functionality_test
TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
new file mode 100644
index 000000000000..1f100fd94d67
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test the consistency of the PMU's CPUID and its features
+ *
+ * Copyright (C) 2023, Tencent, Inc.
+ *
+ * Check that the VM's PMU behaviour is consistent with the
+ * VM CPUID definition.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <x86intrin.h>
+
+#include "pmu.h"
+
+/* Guest payload for any performance counter counting */
+#define NUM_BRANCHES 10
+
+static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
+ void *guest_code)
+{
+ struct kvm_vm *vm;
+
+ vm = vm_create_with_one_vcpu(vcpu, guest_code);
+ vm_init_descriptor_tables(vm);
+ vcpu_init_descriptor_tables(*vcpu);
+
+ return vm;
+}
+
+static uint64_t run_vcpu(struct kvm_vcpu *vcpu, uint64_t *ucall_arg)
+{
+ struct ucall uc;
+
+ vcpu_run(vcpu);
+ switch (get_ucall(vcpu, &uc)) {
+ case UCALL_SYNC:
+ *ucall_arg = uc.args[1];
+ break;
+ case UCALL_DONE:
+ break;
+ default:
+ TEST_ASSERT(false, "Unexpected exit: %s",
+ exit_reason_str(vcpu->run->exit_reason));
+ }
+ return uc.cmd;
+}
+
+static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num,
+ uint32_t ctr_base_msr, uint64_t evt_code)
+{
+ uint32_t global_msr = MSR_CORE_PERF_GLOBAL_CTRL;
+ unsigned int i;
+
+ for (i = 0; i < max_gp_num; i++) {
+ wrmsr(ctr_base_msr + i, 0);
+ wrmsr(MSR_P6_EVNTSEL0 + i, EVENTSEL_OS | EVENTSEL_EN | evt_code);
+ if (version > 1)
+ wrmsr(global_msr, BIT_ULL(i));
+
+ __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+
+ if (version > 1)
+ wrmsr(global_msr, 0);
+
+ GUEST_SYNC(_rdpmc(i));
+ }
+
+ GUEST_DONE();
+}
+
+static void test_arch_events_cpuid(struct kvm_vcpu *vcpu, uint8_t evt_vector,
+ uint8_t unavl_mask, uint8_t idx)
+{
+ struct kvm_cpuid_entry2 *entry;
+ uint32_t ctr_msr = MSR_IA32_PERFCTR0;
+ bool is_supported;
+ uint64_t counter_val = 0;
+
+ entry = vcpu_get_cpuid_entry(vcpu, 0xa);
+ entry->eax = (entry->eax & ~EVT_LEN_MASK) |
+ (evt_vector << EVT_LEN_OFS_BIT);
+ entry->ebx = (entry->ebx & ~EVENTS_MASK) | unavl_mask;
+ vcpu_set_cpuid(vcpu);
+
+ if (vcpu_get_msr(vcpu, MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES)
+ ctr_msr = MSR_IA32_PMC0;
+
+ /* Arch event x is supported if EBX[x]=0 && EAX[31:24]>x */
+ is_supported = !(entry->ebx & BIT_ULL(idx)) &&
+ (((entry->eax & EVT_LEN_MASK) >> EVT_LEN_OFS_BIT) > idx);
+
+ vcpu_args_set(vcpu, 4, X86_INTEL_PMU_VERSION, X86_INTEL_MAX_GP_CTR_NUM,
+ ctr_msr, arch_events[idx]);
+
+ while (run_vcpu(vcpu, &counter_val) != UCALL_DONE)
+ TEST_ASSERT(is_supported == !!counter_val,
+ "Unavailable arch event is counting.");
+}
+
+static void intel_check_arch_event_is_unavl(uint8_t idx)
+{
+ uint8_t eax_evt_vec, ebx_unavl_mask, i, j;
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ /*
+ * A brute force iteration of all combinations of values is likely to
+ * exhaust the limit of the single-threaded thread fd nums, so it's
+ * tested here by iterating through all valid values on a single bit.
+ */
+ for (i = 0; i < ARRAY_SIZE(arch_events); i++) {
+ eax_evt_vec = BIT_ULL(i);
+ for (j = 0; j < ARRAY_SIZE(arch_events); j++) {
+ ebx_unavl_mask = BIT_ULL(j);
+ vm = pmu_vm_create_with_one_vcpu(&vcpu,
+ intel_guest_run_arch_event);
+ test_arch_events_cpuid(vcpu, eax_evt_vec,
+ ebx_unavl_mask, idx);
+
+ kvm_vm_free(vm);
+ }
+ }
+}
+
+static void intel_test_arch_events(void)
+{
+ uint8_t idx;
+
+ for (idx = 0; idx < ARRAY_SIZE(arch_events); idx++) {
+ /*
+ * Given the stability of performance event recurrence,
+ * only these arch events are currently being tested:
+ *
+ * - Core cycle event (idx = 0)
+ * - Instruction retired event (idx = 1)
+ * - Reference cycles event (idx = 2)
+ * - Branch instruction retired event (idx = 5)
+ *
+ * Note that reference cycles is one event that actually cannot
+ * be successfully virtualized.
+ */
+ if (idx > 2 && idx != 5)
+ continue;
+
+ intel_check_arch_event_is_unavl(idx);
+ }
+}
+
+static void intel_test_pmu_cpuid(void)
+{
+ intel_test_arch_events();
+}
+
+int main(int argc, char *argv[])
+{
+ TEST_REQUIRE(get_kvm_param_bool("enable_pmu"));
+
+ if (host_cpu_is_intel) {
+ TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
+ TEST_REQUIRE(X86_INTEL_PMU_VERSION > 0);
+ TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM));
+
+ intel_test_pmu_cpuid();
+ }
+
+ return 0;
+}
--
2.31.1
* Re: [PATCH v2 3/8] KVM: selftests: Test Intel PMU architectural events on gp counters
2023-05-30 13:42 ` [PATCH v2 3/8] KVM: selftests: Test Intel PMU architectural events on gp counters Jinrong Liang
@ 2023-06-28 20:43 ` Sean Christopherson
2023-06-28 21:03 ` Jim Mattson
0 siblings, 1 reply; 15+ messages in thread
From: Sean Christopherson @ 2023-06-28 20:43 UTC (permalink / raw)
To: Jinrong Liang
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
On Tue, May 30, 2023, Jinrong Liang wrote:
> +/* Guest payload for any performance counter counting */
> +#define NUM_BRANCHES 10
> +
> +static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> + void *guest_code)
> +{
> + struct kvm_vm *vm;
> +
> + vm = vm_create_with_one_vcpu(vcpu, guest_code);
> + vm_init_descriptor_tables(vm);
> + vcpu_init_descriptor_tables(*vcpu);
> +
> + return vm;
> +}
> +
> +static uint64_t run_vcpu(struct kvm_vcpu *vcpu, uint64_t *ucall_arg)
> +{
> + struct ucall uc;
> +
> + vcpu_run(vcpu);
> + switch (get_ucall(vcpu, &uc)) {
> + case UCALL_SYNC:
> + *ucall_arg = uc.args[1];
> + break;
> + case UCALL_DONE:
> + break;
> + default:
> + TEST_ASSERT(false, "Unexpected exit: %s",
> + exit_reason_str(vcpu->run->exit_reason));
TEST_FAIL()
> + }
> + return uc.cmd;
> +}
> +
> +static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num,
Unless I'm mistaken, this isn't specific to arch events. And with a bit of
massaging, it doesn't need to be Intel specific. Typically we try to avoid
speculatively creating infrastructure, but in this case we *know* AMD has vPMU
support, and we *know* from KVM-Unit-Tests that accounting for the differences
between MSRs on Intel vs. AMD is doable, so we should write code with an eye
toward supporting both AMD and Intel.
And then we can avoid having to prefix so many functions with "intel", e.g. this
can be something like
static void guest_measure_loop()
or whatever.
> + uint32_t ctr_base_msr, uint64_t evt_code)
> +{
> + uint32_t global_msr = MSR_CORE_PERF_GLOBAL_CTRL;
> + unsigned int i;
> +
> + for (i = 0; i < max_gp_num; i++) {
> + wrmsr(ctr_base_msr + i, 0);
> + wrmsr(MSR_P6_EVNTSEL0 + i, EVENTSEL_OS | EVENTSEL_EN | evt_code);
> + if (version > 1)
> + wrmsr(global_msr, BIT_ULL(i));
> +
> + __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
> +
> + if (version > 1)
> + wrmsr(global_msr, 0);
> +
> + GUEST_SYNC(_rdpmc(i));
> + }
> +
> + GUEST_DONE();
> +}
> +
> +static void test_arch_events_cpuid(struct kvm_vcpu *vcpu, uint8_t evt_vector,
"vector" is confusing, as "vector" usually refers to a vector number, e.g. for
IRQs and exceptions. This is the _length_ of a so called vector. I vote to ignore
the SDM's use of "vector" in this case and instead call it something like
arch_events_bitmap_size. And then arch_events_unavailable_mask?
> + uint8_t unavl_mask, uint8_t idx)
> +{
> + struct kvm_cpuid_entry2 *entry;
> + uint32_t ctr_msr = MSR_IA32_PERFCTR0;
> + bool is_supported;
> + uint64_t counter_val = 0;
> +
> + entry = vcpu_get_cpuid_entry(vcpu, 0xa);
> + entry->eax = (entry->eax & ~EVT_LEN_MASK) |
> + (evt_vector << EVT_LEN_OFS_BIT);
EVT_LEN_OFS_BIT can be a KVM_x86_PROPERTY. And please also add a helper to set
properties, the whole point of the FEATURE and PROPERTY frameworks is to avoid
open coding CPUID manipulations. E.g.
static inline void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
struct kvm_x86_cpu_property property,
uint32_t value)
{
...
}
> + entry->ebx = (entry->ebx & ~EVENTS_MASK) | unavl_mask;
> + vcpu_set_cpuid(vcpu);
> +
> + if (vcpu_get_msr(vcpu, MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES)
> + ctr_msr = MSR_IA32_PMC0;
This can be done in the guest, no?
> +
> + /* Arch event x is supported if EBX[x]=0 && EAX[31:24]>x */
> + is_supported = !(entry->ebx & BIT_ULL(idx)) &&
> + (((entry->eax & EVT_LEN_MASK) >> EVT_LEN_OFS_BIT) > idx);
Please add a helper for this.
> +
> + vcpu_args_set(vcpu, 4, X86_INTEL_PMU_VERSION, X86_INTEL_MAX_GP_CTR_NUM,
> + ctr_msr, arch_events[idx]);
> +
> + while (run_vcpu(vcpu, &counter_val) != UCALL_DONE)
> + TEST_ASSERT(is_supported == !!counter_val,
> + "Unavailable arch event is counting.");
> +}
> +
> +static void intel_check_arch_event_is_unavl(uint8_t idx)
> +{
> + uint8_t eax_evt_vec, ebx_unavl_mask, i, j;
> + struct kvm_vcpu *vcpu;
> + struct kvm_vm *vm;
> +
> + /*
> + * A brute force iteration of all combinations of values is likely to
> + * exhaust the limit of the single-threaded thread fd nums, so it's
> + * tested here by iterating through all valid values on a single bit.
> + */
> + for (i = 0; i < ARRAY_SIZE(arch_events); i++) {
> + eax_evt_vec = BIT_ULL(i);
> + for (j = 0; j < ARRAY_SIZE(arch_events); j++) {
> + ebx_unavl_mask = BIT_ULL(j);
> + vm = pmu_vm_create_with_one_vcpu(&vcpu,
> + intel_guest_run_arch_event);
> + test_arch_events_cpuid(vcpu, eax_evt_vec,
> + ebx_unavl_mask, idx);
> +
> + kvm_vm_free(vm);
> + }
> + }
> +}
> +
> +static void intel_test_arch_events(void)
> +{
> + uint8_t idx;
> +
> + for (idx = 0; idx < ARRAY_SIZE(arch_events); idx++) {
> + /*
> + * Given the stability of performance event recurrence,
> + * only these arch events are currently being tested:
> + *
> + * - Core cycle event (idx = 0)
> + * - Instruction retired event (idx = 1)
> + * - Reference cycles event (idx = 2)
> + * - Branch instruction retired event (idx = 5)
> + *
> + * Note that reference cycles is one event that actually cannot
> + * be successfully virtualized.
> + */
> + if (idx > 2 && idx != 5)
As request in a previous patch, use enums, then the need to document the magic
numbers goes away.
> + continue;
> +
> + intel_check_arch_event_is_unavl(idx);
> + }
> +}
> +
> +static void intel_test_pmu_cpuid(void)
> +{
> + intel_test_arch_events();
Either put the Intel-specific TEST_REQUIRE()s in here, or open code the calls.
Adding a helper and then splitting code across the helper and its sole caller is
unnecessary.
> +}
> +
> +int main(int argc, char *argv[])
> +{
> + TEST_REQUIRE(get_kvm_param_bool("enable_pmu"));
> +
> + if (host_cpu_is_intel) {
Presumably AMD will be supported at some point, but until then, this needs to be
TEST_REQUIRE(host_cpu_is_intel);
> + TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
> + TEST_REQUIRE(X86_INTEL_PMU_VERSION > 0);
> + TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM));
> +
> + intel_test_pmu_cpuid();
> + }
> +
> + return 0;
> +}
> --
> 2.31.1
>
* Re: [PATCH v2 3/8] KVM: selftests: Test Intel PMU architectural events on gp counters
2023-06-28 20:43 ` Sean Christopherson
@ 2023-06-28 21:03 ` Jim Mattson
0 siblings, 0 replies; 15+ messages in thread
From: Jim Mattson @ 2023-06-28 21:03 UTC (permalink / raw)
To: Sean Christopherson
Cc: Jinrong Liang, Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
On Wed, Jun 28, 2023 at 1:44 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, May 30, 2023, Jinrong Liang wrote:
> > +/* Guest payload for any performance counter counting */
> > +#define NUM_BRANCHES 10
> > +
> > +static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> > + void *guest_code)
> > +{
> > + struct kvm_vm *vm;
> > +
> > + vm = vm_create_with_one_vcpu(vcpu, guest_code);
> > + vm_init_descriptor_tables(vm);
> > + vcpu_init_descriptor_tables(*vcpu);
> > +
> > + return vm;
> > +}
> > +
> > +static uint64_t run_vcpu(struct kvm_vcpu *vcpu, uint64_t *ucall_arg)
> > +{
> > + struct ucall uc;
> > +
> > + vcpu_run(vcpu);
> > + switch (get_ucall(vcpu, &uc)) {
> > + case UCALL_SYNC:
> > + *ucall_arg = uc.args[1];
> > + break;
> > + case UCALL_DONE:
> > + break;
> > + default:
> > + TEST_ASSERT(false, "Unexpected exit: %s",
> > + exit_reason_str(vcpu->run->exit_reason));
>
> TEST_FAIL()
>
> > + }
> > + return uc.cmd;
> > +}
> > +
> > +static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num,
>
> Unless I'm mistaken, this isn't specific to arch events. And with a bit of
> massaging, it doesn't need to be Intel specific. Typically we try to avoid
> speculatively creating infrastructure, but in this case we *know* AMD has vPMU
> support, and we *know* from KVM-Unit-Tests that accounting for the differences
> between MSRs on Intel vs. AMD is doable, so we should write code with an eye
> toward supporting both AMD and Intel.
>
> And then we can avoid having to prefix so many functions with "intel", e.g. this
> can be something like
>
> static void guest_measure_loop()
>
> or whatever.
>
> > + uint32_t ctr_base_msr, uint64_t evt_code)
> > +{
> > + uint32_t global_msr = MSR_CORE_PERF_GLOBAL_CTRL;
> > + unsigned int i;
> > +
> > + for (i = 0; i < max_gp_num; i++) {
> > + wrmsr(ctr_base_msr + i, 0);
> > + wrmsr(MSR_P6_EVNTSEL0 + i, EVENTSEL_OS | EVENTSEL_EN | evt_code);
> > + if (version > 1)
> > + wrmsr(global_msr, BIT_ULL(i));
> > +
> > + __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
> > +
> > + if (version > 1)
> > + wrmsr(global_msr, 0);
> > +
> > + GUEST_SYNC(_rdpmc(i));
> > + }
> > +
> > + GUEST_DONE();
> > +}
> > +
> > +static void test_arch_events_cpuid(struct kvm_vcpu *vcpu, uint8_t evt_vector,
>
> "vector" is confusing, as "vector" usually refers to a vector number, e.g. for
> IRQs and exceptions. This is the _length_ of a so called vector. I vote to ignore
> the SDM's use of "vector" in this case and instead call it something like
> arch_events_bitmap_size. And then arch_events_unavailable_mask?
>
> > + uint8_t unavl_mask, uint8_t idx)
> > +{
> > + struct kvm_cpuid_entry2 *entry;
> > + uint32_t ctr_msr = MSR_IA32_PERFCTR0;
> > + bool is_supported;
> > + uint64_t counter_val = 0;
> > +
> > + entry = vcpu_get_cpuid_entry(vcpu, 0xa);
> > + entry->eax = (entry->eax & ~EVT_LEN_MASK) |
> > + (evt_vector << EVT_LEN_OFS_BIT);
>
> EVT_LEN_OFS_BIT can be a KVM_x86_PROPERTY. And please also add a helper to set
> properties, the whole point of the FEATURE and PROPERTY frameworks is to avoid
> open coding CPUID manipulations. E.g.
>
> static inline void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
> struct kvm_x86_cpu_property property,
> uint32_t value)
> {
> ...
> }
>
> > + entry->ebx = (entry->ebx & ~EVENTS_MASK) | unavl_mask;
> > + vcpu_set_cpuid(vcpu);
> > +
> > + if (vcpu_get_msr(vcpu, MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES)
> > + ctr_msr = MSR_IA32_PMC0;
>
> This can be done in the guest, no?
>
> > +
> > + /* Arch event x is supported if EBX[x]=0 && EAX[31:24]>x */
> > + is_supported = !(entry->ebx & BIT_ULL(idx)) &&
> > + (((entry->eax & EVT_LEN_MASK) >> EVT_LEN_OFS_BIT) > idx);
>
> Please add a helper for this.
>
> > +
> > + vcpu_args_set(vcpu, 4, X86_INTEL_PMU_VERSION, X86_INTEL_MAX_GP_CTR_NUM,
> > + ctr_msr, arch_events[idx]);
> > +
> > + while (run_vcpu(vcpu, &counter_val) != UCALL_DONE)
> > + TEST_ASSERT(is_supported == !!counter_val,
> > + "Unavailable arch event is counting.");
> > +}
> > +
> > +static void intel_check_arch_event_is_unavl(uint8_t idx)
> > +{
> > + uint8_t eax_evt_vec, ebx_unavl_mask, i, j;
> > + struct kvm_vcpu *vcpu;
> > + struct kvm_vm *vm;
> > +
> > + /*
> > + * A brute force iteration of all combinations of values is likely to
> > + * exhaust the limit of the single-threaded thread fd nums, so it's
> > + * tested here by iterating through all valid values on a single bit.
> > + */
> > + for (i = 0; i < ARRAY_SIZE(arch_events); i++) {
> > + eax_evt_vec = BIT_ULL(i);
> > + for (j = 0; j < ARRAY_SIZE(arch_events); j++) {
> > + ebx_unavl_mask = BIT_ULL(j);
> > + vm = pmu_vm_create_with_one_vcpu(&vcpu,
> > + intel_guest_run_arch_event);
> > + test_arch_events_cpuid(vcpu, eax_evt_vec,
> > + ebx_unavl_mask, idx);
> > +
> > + kvm_vm_free(vm);
> > + }
> > + }
> > +}
> > +
> > +static void intel_test_arch_events(void)
> > +{
> > + uint8_t idx;
> > +
> > + for (idx = 0; idx < ARRAY_SIZE(arch_events); idx++) {
> > + /*
> > + * Given the stability of performance event recurrence,
> > + * only these arch events are currently being tested:
> > + *
> > + * - Core cycle event (idx = 0)
> > + * - Instruction retired event (idx = 1)
> > + * - Reference cycles event (idx = 2)
> > + * - Branch instruction retired event (idx = 5)
> > + *
> > + * Note that reference cycles is one event that actually cannot
> > + * be successfully virtualized.
> > + */
Actually, there is no reason that reference cycles can't be
successfully virtualized. It just can't be done with the current vPMU
infrastructure.
> > + if (idx > 2 && idx != 5)
>
> As request in a previous patch, use enums, then the need to document the magic
> numbers goes away.
>
> > + continue;
> > +
> > + intel_check_arch_event_is_unavl(idx);
> > + }
> > +}
> > +
> > +static void intel_test_pmu_cpuid(void)
> > +{
> > + intel_test_arch_events();
>
> Either put the Intel-specific TEST_REQUIRE()s in here, or open code the calls.
> Adding a helper and then splitting code across the helper and its sole caller is
> unnecessary.
>
> > +}
> > +
> > +int main(int argc, char *argv[])
> > +{
> > + TEST_REQUIRE(get_kvm_param_bool("enable_pmu"));
> > +
> > + if (host_cpu_is_intel) {
>
> Presumably AMD will be supported at some point, but until then, this needs to be
>
> TEST_REQUIRE(host_cpu_is_intel);
>
> > + TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
> > + TEST_REQUIRE(X86_INTEL_PMU_VERSION > 0);
> > + TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM));
> > +
> > + intel_test_pmu_cpuid();
> > + }
> > +
> > + return 0;
> > +}
> > --
> > 2.31.1
> >
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 4/8] KVM: selftests: Test Intel PMU architectural events on fixed counters
2023-05-30 13:42 [PATCH v2 0/8] KVM: selftests: Test the consistency of the PMU's CPUID and its features Jinrong Liang
` (2 preceding siblings ...)
2023-05-30 13:42 ` [PATCH v2 3/8] KVM: selftests: Test Intel PMU architectural events on gp counters Jinrong Liang
@ 2023-05-30 13:42 ` Jinrong Liang
2023-05-30 13:42 ` [PATCH v2 5/8] KVM: selftests: Test consistency of CPUID with num of gp counters Jinrong Liang
` (3 subsequent siblings)
7 siblings, 0 replies; 15+ messages in thread
From: Jinrong Liang @ 2023-05-30 13:42 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
From: Jinrong Liang <cloudliang@tencent.com>
Update the test to cover Intel PMU architectural events on fixed counters.
Per the Intel SDM, PMU users can also count architectural performance
events on fixed counters (specifically, FIXED_CTR0 for the instructions
retired event and FIXED_CTR1 for the CPU core cycles event). Therefore,
if the guest's CPUID indicates that an architectural event is not
available, the corresponding fixed counter must not count that event
either.
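For reference, the evt_code_for_fixed_ctr() mapping this relies on can be sketched as below. The (umask << 8) | event_select encoding and the exact event codes are assumptions based on the SDM's architectural events, not taken from the patch itself:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch: the architectural event aliased by each fixed
 * counter, encoded as (umask << 8) | event_select.
 */
static uint64_t evt_code_for_fixed_ctr(uint8_t idx)
{
	switch (idx) {
	case 0:
		return 0x00c0;	/* INST_RETIRED.ANY: event 0xC0, umask 0x00 */
	case 1:
		return 0x003c;	/* CPU_CLK_UNHALTED.THREAD: event 0x3C, umask 0x00 */
	default:
		return 0;	/* no architectural alias assumed here */
	}
}
```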
Co-developed-by: Like Xu <likexu@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
---
.../kvm/x86_64/pmu_basic_functionality_test.c | 28 +++++++++++++++++--
1 file changed, 25 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
index 1f100fd94d67..81029d05367a 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
@@ -47,7 +47,8 @@ static uint64_t run_vcpu(struct kvm_vcpu *vcpu, uint64_t *ucall_arg)
}
static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num,
- uint32_t ctr_base_msr, uint64_t evt_code)
+ uint32_t ctr_base_msr, uint64_t evt_code,
+ uint8_t max_fixed_num)
{
uint32_t global_msr = MSR_CORE_PERF_GLOBAL_CTRL;
unsigned int i;
@@ -66,6 +67,27 @@ static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num,
GUEST_SYNC(_rdpmc(i));
}
+ /* No need to test independent arch events on fixed counters. */
+ if (version <= 1 || max_fixed_num <= 1)
+ goto done;
+
+ if (evt_code == evt_code_for_fixed_ctr(0))
+ i = 0;
+ else if (evt_code == evt_code_for_fixed_ctr(1))
+ i = 1;
+ else
+ goto done;
+
+ wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
+ wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i));
+ wrmsr(global_msr, BIT_ULL(INTEL_PMC_IDX_FIXED + i));
+
+ __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+
+ wrmsr(global_msr, 0);
+ GUEST_SYNC(_rdpmc(RDPMC_FIXED_BASE | i));
+
+done:
GUEST_DONE();
}
@@ -90,8 +112,8 @@ static void test_arch_events_cpuid(struct kvm_vcpu *vcpu, uint8_t evt_vector,
is_supported = !(entry->ebx & BIT_ULL(idx)) &&
(((entry->eax & EVT_LEN_MASK) >> EVT_LEN_OFS_BIT) > idx);
- vcpu_args_set(vcpu, 4, X86_INTEL_PMU_VERSION, X86_INTEL_MAX_GP_CTR_NUM,
- ctr_msr, arch_events[idx]);
+ vcpu_args_set(vcpu, 5, X86_INTEL_PMU_VERSION, X86_INTEL_MAX_GP_CTR_NUM,
+ ctr_msr, arch_events[idx], X86_INTEL_MAX_FIXED_CTR_NUM);
while (run_vcpu(vcpu, &counter_val) != UCALL_DONE)
TEST_ASSERT(is_supported == !!counter_val,
--
2.31.1
* [PATCH v2 5/8] KVM: selftests: Test consistency of CPUID with num of gp counters
2023-05-30 13:42 [PATCH v2 0/8] KVM: selftests: Test the consistency of the PMU's CPUID and its features Jinrong Liang
` (3 preceding siblings ...)
2023-05-30 13:42 ` [PATCH v2 4/8] KVM: selftests: Test Intel PMU architectural events on fixed counters Jinrong Liang
@ 2023-05-30 13:42 ` Jinrong Liang
2023-05-30 13:42 ` [PATCH v2 6/8] KVM: selftests: Test consistency of CPUID with num of fixed counters Jinrong Liang
` (2 subsequent siblings)
7 siblings, 0 replies; 15+ messages in thread
From: Jinrong Liang @ 2023-05-30 13:42 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
From: Like Xu <likexu@tencent.com>
Add a test to check whether non-existent counters can be accessed in the
guest after the number of Intel general-purpose performance counters has
been determined via CPUID. When the number of counters is less than 3,
KVM does not emulate #GP for a non-present counter, due to its
compatibility handling of MSR_P6_PERFCTRx. Nor will KVM emulate more
counters than it can support.
Co-developed-by: Jinrong Liang <cloudliang@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
---
.../kvm/x86_64/pmu_basic_functionality_test.c | 88 +++++++++++++++++++
1 file changed, 88 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
index 81029d05367a..116437ac2095 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
@@ -16,6 +16,17 @@
/* Guest payload for any performance counter counting */
#define NUM_BRANCHES 10
+/*
+ * KVM handles writes to the first two non-existent counters
+ * (MSR_P6_PERFCTRx) via kvm_pr_unimpl_wrmsr() instead of injecting #GP.
+ */
+#define MSR_INTEL_ARCH_PMU_GPCTR (MSR_IA32_PERFCTR0 + 2)
+
+static const uint64_t perf_caps[] = {
+ 0,
+ PMU_CAP_FW_WRITES,
+};
+
static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
void *guest_code)
{
@@ -169,9 +180,86 @@ static void intel_test_arch_events(void)
}
}
+static void guest_wr_and_rd_msrs(uint32_t base, uint64_t value,
+ uint8_t begin, uint8_t offset)
+{
+ unsigned int i;
+ uint8_t wr_vector, rd_vector;
+ uint64_t msr_val;
+
+ for (i = begin; i < begin + offset; i++) {
+ wr_vector = wrmsr_safe(base + i, value);
+ rd_vector = rdmsr_safe(base + i, &msr_val);
+ if (wr_vector == GP_VECTOR || rd_vector == GP_VECTOR)
+ GUEST_SYNC(GP_VECTOR);
+ else
+ GUEST_SYNC(msr_val);
+ }
+
+ GUEST_DONE();
+}
+
+/* Access the first out-of-range counter register to trigger #GP */
+static void test_oob_gp_counter(uint8_t eax_gp_num, uint8_t offset,
+ uint64_t perf_cap, uint64_t exported)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+ struct kvm_cpuid_entry2 *entry;
+ uint32_t ctr_msr = MSR_IA32_PERFCTR0;
+ uint64_t msr_val;
+
+ vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_wr_and_rd_msrs);
+
+ entry = vcpu_get_cpuid_entry(vcpu, 0xa);
+ entry->eax = (entry->eax & ~GP_CTR_NUM_MASK) |
+ (eax_gp_num << GP_CTR_NUM_OFS_BIT);
+ vcpu_set_cpuid(vcpu);
+
+ if (perf_cap & PMU_CAP_FW_WRITES)
+ ctr_msr = MSR_IA32_PMC0;
+
+ vcpu_set_msr(vcpu, MSR_IA32_PERF_CAPABILITIES, perf_cap);
+ vcpu_args_set(vcpu, 4, ctr_msr, 0xffff, eax_gp_num, offset);
+ while (run_vcpu(vcpu, &msr_val) != UCALL_DONE)
+ TEST_ASSERT(msr_val == exported,
+ "Unexpected when testing gp counter num.");
+
+ kvm_vm_free(vm);
+}
+
+static void intel_test_counters_num(void)
+{
+ unsigned int i;
+ uint8_t kvm_gp_num = X86_INTEL_MAX_GP_CTR_NUM;
+
+ TEST_REQUIRE(kvm_gp_num > 2);
+
+ for (i = 0; i < ARRAY_SIZE(perf_caps); i++) {
+ /*
+ * For compatibility reasons, KVM does not emulate #GP when
+ * MSR_P6_PERFCTR[0|1] is not present, but this doesn't affect
+ * the #GP-based presence checks for MSR_IA32_PMCx.
+ */
+ if (perf_caps[i] & PMU_CAP_FW_WRITES)
+ test_oob_gp_counter(0, 1, perf_caps[i], GP_VECTOR);
+
+ test_oob_gp_counter(2, 1, perf_caps[i], GP_VECTOR);
+ test_oob_gp_counter(kvm_gp_num, 1, perf_caps[i], GP_VECTOR);
+
+ /* KVM doesn't emulate more counters than it can support. */
+ test_oob_gp_counter(kvm_gp_num + 1, 1, perf_caps[i], GP_VECTOR);
+
+ /* Test that KVM drops writes to MSR_P6_PERFCTR[0|1]. */
+ if (perf_caps[i] == 0)
+ test_oob_gp_counter(0, 2, perf_caps[i], 0);
+ }
+}
+
static void intel_test_pmu_cpuid(void)
{
intel_test_arch_events();
+ intel_test_counters_num();
}
int main(int argc, char *argv[])
--
2.31.1
* [PATCH v2 6/8] KVM: selftests: Test consistency of CPUID with num of fixed counters
2023-05-30 13:42 [PATCH v2 0/8] KVM: selftests: Test the consistency of the PMU's CPUID and its features Jinrong Liang
` (4 preceding siblings ...)
2023-05-30 13:42 ` [PATCH v2 5/8] KVM: selftests: Test consistency of CPUID with num of gp counters Jinrong Liang
@ 2023-05-30 13:42 ` Jinrong Liang
2023-06-28 21:01 ` Sean Christopherson
2023-05-30 13:42 ` [PATCH v2 7/8] KVM: selftests: Test Intel supported fixed counters bit mask Jinrong Liang
2023-05-30 13:42 ` [PATCH v2 8/8] KVM: selftests: Test consistency of PMU MSRs with Intel PMU version Jinrong Liang
7 siblings, 1 reply; 15+ messages in thread
From: Jinrong Liang @ 2023-05-30 13:42 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
From: Jinrong Liang <cloudliang@tencent.com>
Add a test to check whether non-existent fixed-function counters can be
accessed in the guest after the number of Intel fixed-function
performance counters has been determined via CPUID. Per the SDM,
fixed-function performance counter 'i' is supported if ECX[i] ||
(EDX[4:0] > i). KVM doesn't emulate more counters than it can support.
Co-developed-by: Like Xu <likexu@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
---
.../kvm/x86_64/pmu_basic_functionality_test.c | 42 +++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
index 116437ac2095..e19f8c2774c5 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
@@ -228,10 +228,46 @@ static void test_oob_gp_counter(uint8_t eax_gp_num, uint8_t offset,
kvm_vm_free(vm);
}
+static void intel_test_oob_fixed_ctr(uint8_t edx_fix_num,
+ uint32_t fixed_bitmask, uint64_t expected)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+ struct kvm_cpuid_entry2 *entry;
+ uint8_t idx = edx_fix_num;
+ bool visible;
+ uint64_t msr_val;
+
+ vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_wr_and_rd_msrs);
+
+ entry = vcpu_get_cpuid_entry(vcpu, 0xa);
+ entry->ecx = fixed_bitmask;
+ entry->edx = (entry->edx & ~FIXED_CTR_NUM_MASK) | edx_fix_num;
+ vcpu_set_cpuid(vcpu);
+
+ /* Per Intel SDM, FxCtr[i]_is_supported := ECX[i] || (EDX[4:0] > i). */
+ visible = (entry->ecx & BIT_ULL(idx) ||
+ ((entry->edx & FIXED_CTR_NUM_MASK) > idx));
+
+ /* KVM doesn't emulate more fixed counters than it can support. */
+ if (idx >= X86_INTEL_MAX_FIXED_CTR_NUM)
+ visible = false;
+
+ vcpu_args_set(vcpu, 4, MSR_CORE_PERF_FIXED_CTR0, 0xffff, idx, 1);
+ if (!visible)
+ while (run_vcpu(vcpu, &msr_val) != UCALL_DONE)
+ TEST_ASSERT(msr_val == expected,
+ "Unexpected when testing fixed counter num.");
+
+ kvm_vm_free(vm);
+}
+
static void intel_test_counters_num(void)
{
unsigned int i;
+ uint32_t ecx;
uint8_t kvm_gp_num = X86_INTEL_MAX_GP_CTR_NUM;
+ uint8_t kvm_fixed_num = X86_INTEL_MAX_FIXED_CTR_NUM;
TEST_REQUIRE(kvm_gp_num > 2);
@@ -254,6 +290,12 @@ static void intel_test_counters_num(void)
if (perf_caps[i] == 0)
test_oob_gp_counter(0, 2, perf_caps[i], 0);
}
+
+ for (ecx = 0; ecx <= X86_INTEL_FIXED_CTRS_BITMASK + 1; ecx++) {
+ intel_test_oob_fixed_ctr(0, ecx, GP_VECTOR);
+ intel_test_oob_fixed_ctr(kvm_fixed_num, ecx, GP_VECTOR);
+ intel_test_oob_fixed_ctr(kvm_fixed_num + 1, ecx, GP_VECTOR);
+ }
}
static void intel_test_pmu_cpuid(void)
--
2.31.1
* Re: [PATCH v2 6/8] KVM: selftests: Test consistency of CPUID with num of fixed counters
2023-05-30 13:42 ` [PATCH v2 6/8] KVM: selftests: Test consistency of CPUID with num of fixed counters Jinrong Liang
@ 2023-06-28 21:01 ` Sean Christopherson
0 siblings, 0 replies; 15+ messages in thread
From: Sean Christopherson @ 2023-06-28 21:01 UTC (permalink / raw)
To: Jinrong Liang
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
On Tue, May 30, 2023, Jinrong Liang wrote:
> From: Jinrong Liang <cloudliang@tencent.com>
>
> Add test to check if non-existent counters can be accessed in guest after
> determining the number of Intel generic performance counters by CPUID.
> Per SDM, fixed-function performance counter 'i' is supported if ECX[i] ||
> (EDX[4:0] > i). KVM doesn't emulate more counters than it can support.
>
> Co-developed-by: Like Xu <likexu@tencent.com>
> Signed-off-by: Like Xu <likexu@tencent.com>
> Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
> ---
> .../kvm/x86_64/pmu_basic_functionality_test.c | 42 +++++++++++++++++++
> 1 file changed, 42 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
> index 116437ac2095..e19f8c2774c5 100644
> --- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
> @@ -228,10 +228,46 @@ static void test_oob_gp_counter(uint8_t eax_gp_num, uint8_t offset,
> kvm_vm_free(vm);
> }
>
> +static void intel_test_oob_fixed_ctr(uint8_t edx_fix_num,
> + uint32_t fixed_bitmask, uint64_t expected)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> + struct kvm_cpuid_entry2 *entry;
> + uint8_t idx = edx_fix_num;
> + bool visible;
> + uint64_t msr_val;
> +
> + vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_wr_and_rd_msrs);
> +
> + entry = vcpu_get_cpuid_entry(vcpu, 0xa);
> + entry->ecx = fixed_bitmask;
> + entry->edx = (entry->edx & ~FIXED_CTR_NUM_MASK) | edx_fix_num;
> + vcpu_set_cpuid(vcpu);
> +
> + /* Per Intel SDM, FxCtr[i]_is_supported := ECX[i] || (EDX[4:0] > i). */
> + visible = (entry->ecx & BIT_ULL(idx) ||
> + ((entry->edx & FIXED_CTR_NUM_MASK) > idx));
Add a helper (in pmu.h) for this one too.
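A self-contained sketch of what such a helper would compute, directly from the formula in the comment above (the helper name is hypothetical, not the pmu.h API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Per the SDM: fixed counter 'idx' is supported if CPUID.0xA ECX[idx]
 * is set, or EDX[4:0] (the number of fixed counters) > idx.
 */
static bool fixed_counter_is_supported(uint32_t cpuid_0xa_ecx,
				       uint32_t cpuid_0xa_edx, uint8_t idx)
{
	uint8_t nr_fixed_counters = cpuid_0xa_edx & 0x1f;

	return (cpuid_0xa_ecx & (1u << idx)) || nr_fixed_counters > idx;
}
```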
> + /* KVM doesn't emulate more fixed counters than it can support. */
> + if (idx >= X86_INTEL_MAX_FIXED_CTR_NUM)
> + visible = false;
> +
> + vcpu_args_set(vcpu, 4, MSR_CORE_PERF_FIXED_CTR0, 0xffff, idx, 1);
> + if (!visible)
Curly braces need around a multi-line statement.
> + while (run_vcpu(vcpu, &msr_val) != UCALL_DONE)
> + TEST_ASSERT(msr_val == expected,
> + "Unexpected when testing fixed counter num.");
ASSERT_EQ() will print the expected versus actual values for you.
* [PATCH v2 7/8] KVM: selftests: Test Intel supported fixed counters bit mask
2023-05-30 13:42 [PATCH v2 0/8] KVM: selftests: Test the consistency of the PMU's CPUID and its features Jinrong Liang
` (5 preceding siblings ...)
2023-05-30 13:42 ` [PATCH v2 6/8] KVM: selftests: Test consistency of CPUID with num of fixed counters Jinrong Liang
@ 2023-05-30 13:42 ` Jinrong Liang
2023-06-28 21:05 ` Sean Christopherson
2023-05-30 13:42 ` [PATCH v2 8/8] KVM: selftests: Test consistency of PMU MSRs with Intel PMU version Jinrong Liang
7 siblings, 1 reply; 15+ messages in thread
From: Jinrong Liang @ 2023-05-30 13:42 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
From: Like Xu <likexu@tencent.com>
Add a test to check that fixed counters enabled via guest
CPUID.0xA.ECX (instead of EDX[04:00]) work as expected.
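For reference, the enable-bit layout exercised by the guest code can be sketched as follows; this is a standalone illustration of the SDM's MSR layouts as used by the test, not code from the patch:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Fixed counter i owns a 4-bit control field in IA32_FIXED_CTR_CTRL;
 * bit 0 of that field enables ring-0 counting, which matches the
 * BIT_ULL(4 * i) writes in the test. The counter also maps to
 * bit 32 + i in IA32_PERF_GLOBAL_CTRL (INTEL_PMC_IDX_FIXED == 32).
 */
static uint64_t fixed_ctr_ctrl_os_enable(uint8_t i)
{
	return 1ull << (4 * i);
}

static uint64_t global_ctrl_fixed_bit(uint8_t i)
{
	return 1ull << (32 + i);
}
```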
Co-developed-by: Jinrong Liang <cloudliang@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
---
.../kvm/x86_64/pmu_basic_functionality_test.c | 71 +++++++++++++++++++
1 file changed, 71 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
index e19f8c2774c5..108cfe254095 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
@@ -298,10 +298,81 @@ static void intel_test_counters_num(void)
}
}
+static void intel_guest_run_fixed_counters(uint64_t supported_bitmask,
+ uint8_t max_fixed_num)
+{
+ unsigned int i;
+ uint64_t msr_val;
+
+ for (i = 0; i < max_fixed_num; i++) {
+ if (!(supported_bitmask & BIT_ULL(i)))
+ continue;
+
+ if (wrmsr_safe(MSR_CORE_PERF_FIXED_CTR0 + i, 0) == GP_VECTOR)
+ GUEST_SYNC(GP_VECTOR);
+
+ wrmsr_safe(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i));
+ wrmsr_safe(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(INTEL_PMC_IDX_FIXED + i));
+ __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+ wrmsr_safe(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+ rdmsr_safe(MSR_CORE_PERF_FIXED_CTR0 + i, &msr_val);
+
+ GUEST_SYNC(msr_val);
+ }
+
+ GUEST_DONE();
+}
+
+static void test_fixed_counters_setup(struct kvm_vcpu *vcpu, uint8_t edx_fix_num,
+ uint32_t fixed_bitmask, bool expected)
+{
+ struct kvm_cpuid_entry2 *entry;
+ uint8_t max_fixed_num = X86_INTEL_MAX_FIXED_CTR_NUM;
+ uint64_t supported_bitmask = 0;
+ uint64_t msr_val;
+ unsigned int i;
+
+ entry = vcpu_get_cpuid_entry(vcpu, 0xa);
+ entry->ecx = fixed_bitmask;
+ entry->edx = (entry->edx & ~FIXED_CTR_NUM_MASK) | edx_fix_num;
+ vcpu_set_cpuid(vcpu);
+
+ for (i = 0; i < max_fixed_num; i++) {
+ if (entry->ecx & BIT_ULL(i) ||
+ ((entry->edx & FIXED_CTR_NUM_MASK) > i))
+ supported_bitmask |= BIT_ULL(i);
+ }
+
+ vcpu_args_set(vcpu, 2, supported_bitmask, max_fixed_num);
+
+ while (run_vcpu(vcpu, &msr_val) != UCALL_DONE)
+ TEST_ASSERT(!!msr_val == expected,
+ "Unexpected when testing fixed counter.");
+}
+
+static void intel_test_fixed_counters(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+ uint32_t ecx;
+ uint8_t edx, num = X86_INTEL_MAX_FIXED_CTR_NUM;
+
+ for (edx = 0; edx <= num; edx++) {
+ /* KVM doesn't emulate more fixed counters than it can support. */
+ for (ecx = 0; ecx <= (BIT_ULL(num) - 1); ecx++) {
+ vm = pmu_vm_create_with_one_vcpu(&vcpu,
+ intel_guest_run_fixed_counters);
+ test_fixed_counters_setup(vcpu, edx, ecx, true);
+ kvm_vm_free(vm);
+ }
+ }
+}
+
static void intel_test_pmu_cpuid(void)
{
intel_test_arch_events();
intel_test_counters_num();
+ intel_test_fixed_counters();
}
int main(int argc, char *argv[])
--
2.31.1
* Re: [PATCH v2 7/8] KVM: selftests: Test Intel supported fixed counters bit mask
2023-05-30 13:42 ` [PATCH v2 7/8] KVM: selftests: Test Intel supported fixed counters bit mask Jinrong Liang
@ 2023-06-28 21:05 ` Sean Christopherson
0 siblings, 0 replies; 15+ messages in thread
From: Sean Christopherson @ 2023-06-28 21:05 UTC (permalink / raw)
To: Jinrong Liang
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
On Tue, May 30, 2023, Jinrong Liang wrote:
> +static void test_fixed_counters_setup(struct kvm_vcpu *vcpu, uint8_t edx_fix_num,
> + uint32_t fixed_bitmask, bool expected)
> +{
> + struct kvm_cpuid_entry2 *entry;
> + uint8_t max_fixed_num = X86_INTEL_MAX_FIXED_CTR_NUM;
> + uint64_t supported_bitmask = 0;
> + uint64_t msr_val;
> + unsigned int i;
> +
> + entry = vcpu_get_cpuid_entry(vcpu, 0xa);
> + entry->ecx = fixed_bitmask;
> + entry->edx = (entry->edx & ~FIXED_CTR_NUM_MASK) | edx_fix_num;
> + vcpu_set_cpuid(vcpu);
> +
> + for (i = 0; i < max_fixed_num; i++) {
> + if (entry->ecx & BIT_ULL(i) ||
> + ((entry->edx & FIXED_CTR_NUM_MASK) > i))
> + supported_bitmask |= BIT_ULL(i);
> + }
> +
> + vcpu_args_set(vcpu, 2, supported_bitmask, max_fixed_num);
All of this can be queried from the guest, no? Then you also verify that KVM is
passing in the correct CPUID info too.
> + while (run_vcpu(vcpu, &msr_val) != UCALL_DONE)
> + TEST_ASSERT(!!msr_val == expected,
> + "Unexpected when testing fixed counter.");
ASSERT_EQ()
* [PATCH v2 8/8] KVM: selftests: Test consistency of PMU MSRs with Intel PMU version
2023-05-30 13:42 [PATCH v2 0/8] KVM: selftests: Test the consistency of the PMU's CPUID and its features Jinrong Liang
` (6 preceding siblings ...)
2023-05-30 13:42 ` [PATCH v2 7/8] KVM: selftests: Test Intel supported fixed counters bit mask Jinrong Liang
@ 2023-05-30 13:42 ` Jinrong Liang
7 siblings, 0 replies; 15+ messages in thread
From: Jinrong Liang @ 2023-05-30 13:42 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis,
Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm, linux-kernel
From: Jinrong Liang <cloudliang@tencent.com>
KVM user space may control the Intel guest PMU version number via
CPUID.0AH:EAX[07:00]. Add a test to check whether a typical PMU register
that is not available at the configured version number leaks into the
guest.
Co-developed-by: Like Xu <likexu@tencent.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
---
.../kvm/x86_64/pmu_basic_functionality_test.c | 64 +++++++++++++++++++
1 file changed, 64 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
index 108cfe254095..7da3eaf9ab5a 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c
@@ -368,11 +368,75 @@ static void intel_test_fixed_counters(void)
}
}
+static void intel_guest_check_pmu_version(uint8_t version)
+{
+ switch (version) {
+ case 0:
+ GUEST_SYNC(wrmsr_safe(MSR_INTEL_ARCH_PMU_GPCTR, 0xffffull));
+ case 1:
+ GUEST_SYNC(wrmsr_safe(MSR_CORE_PERF_GLOBAL_CTRL, 0x1ull));
+ case 2:
+ /*
+ * The AnyThread bit is only supported in PMU version 3.
+ *
+ * Oddly, when version == 0, setting the AnyThread bit (bit 21)
+ * in MSR_P6_EVNTSEL0 and MSR_P6_EVNTSEL1 does not generate #GP,
+ * while setting it in MSR_P6_EVNTSEL0+x (MAX_GP_CTR_NUM > x > 2)
+ * does generate #GP.
+ */
+ if (version == 0)
+ break;
+
+ GUEST_SYNC(wrmsr_safe(MSR_P6_EVNTSEL0, EVENTSEL_ANY));
+ break;
+ default:
+ /* KVM currently supports up to PMU version 2. */
+ GUEST_SYNC(GP_VECTOR);
+ }
+
+ GUEST_DONE();
+}
+
+static void test_pmu_version_setup(struct kvm_vcpu *vcpu, uint8_t version,
+ uint64_t expected)
+{
+ struct kvm_cpuid_entry2 *entry;
+ uint64_t msr_val;
+
+ entry = vcpu_get_cpuid_entry(vcpu, 0xa);
+ entry->eax = (entry->eax & ~PMU_VERSION_MASK) | version;
+ vcpu_set_cpuid(vcpu);
+
+ vcpu_args_set(vcpu, 1, version);
+ while (run_vcpu(vcpu, &msr_val) != UCALL_DONE) {
+ TEST_ASSERT(msr_val == expected,
+ "Something beyond this PMU version is leaked.");
+ }
+}
+
+static void intel_test_pmu_version(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+ uint8_t version, unsupported_version = X86_INTEL_PMU_VERSION + 1;
+
+ TEST_REQUIRE(X86_INTEL_MAX_FIXED_CTR_NUM > 2);
+
+ for (version = 0; version <= unsupported_version; version++) {
+ vm = pmu_vm_create_with_one_vcpu(&vcpu,
+ intel_guest_check_pmu_version);
+ test_pmu_version_setup(vcpu, version, GP_VECTOR);
+ kvm_vm_free(vm);
+ }
+}
+
static void intel_test_pmu_cpuid(void)
{
intel_test_arch_events();
intel_test_counters_num();
intel_test_fixed_counters();
+ intel_test_pmu_version();
}
int main(int argc, char *argv[])
--
2.31.1