linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v10 0/3] KVM: selftests: aarch64: Introduce pmu_event_filter_test
@ 2024-06-19  8:31 Shaoqin Huang
  2024-06-19  8:31 ` [PATCH v10 1/3] KVM: selftests: aarch64: Add helper function for the vpmu vcpu creation Shaoqin Huang
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Shaoqin Huang @ 2024-06-19  8:31 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier, kvmarm
  Cc: Shaoqin Huang, James Morse, kvm, linux-arm-kernel, linux-kernel,
	linux-kselftest, Paolo Bonzini, Shuah Khan, Suzuki K Poulose,
	Zenghui Yu

The test is inspired by the pmu_event_filter_test implemented for x86. The
arm64 platform has the same ability to set a PMU event filter, through the
KVM_ARM_VCPU_PMU_V3_FILTER attribute, so add an equivalent test for arm64.

The series first creates helper functions that can be shared by vPMU-related
tests, and then implements the test itself.
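
For reference, userspace programs the per-vCPU filter with the
KVM_SET_DEVICE_ATTR ioctl. A minimal sketch (not part of this series; it uses
the uapi definitions from <linux/kvm.h>, error handling is omitted, the event
number is only an example, and vcpu_fd is assumed to be the file descriptor
returned by KVM_CREATE_VCPU):

	/* Filter: allow event 0x08 (instructions retired) for the guest. */
	struct kvm_pmu_event_filter filter = {
		.base_event	= 0x08,
		.nevents	= 1,
		.action		= KVM_PMU_EVENT_ALLOW,
	};
	struct kvm_device_attr attr = {
		.group	= KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr	= KVM_ARM_VCPU_PMU_V3_FILTER,
		.addr	= (uint64_t)&filter,
	};

	ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);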

Changelog:
----------
v9->v10:
  - Remove the first_filter checking in the prepare_expected_pmce function.
  - Add new EVENT_[ALLOW|DENY] macros to make the filter definitions more
  readable.
  - Some small improvements.

v8->v9:
  - Rebased to latest kvm-arm/next.

v7->v8:
  - Rebased to kvm-arm/next.
  - Deleted the GIC-layout-related stuff.
  - Fixed the checking logic in the kvm_pmu_support_events.

v6->v7:
  - Rebased to v6.9-rc3.

v5->v6:
  - Rebased to v6.9-rc1.
  - Collect RB.
  - Add multiple filter test.

v4->v5:
  - Rebased to v6.8-rc6.
  - Refactor the helper functions to make them fine-grained and easy to use.
  - Naming improvements.
  - Use the kvm_device_attr_set() helper.
  - Make the test descriptor array readable and clean.
  - Delete the patch which moves the pmu related helper to vpmu.h.
  - Remove the kvm_supports_pmu_event_filter() function since nobody will run
  this on an old kernel.

v3->v4:
  - Rebased to v6.8-rc2.

v2->v3:
  - Check the PMCEID registers in guest code instead of the PMU event counts,
  since different hardware may produce different counts; checking PMCEID keeps
  the test stable across platforms.             [Eric]
  - Fixed some typos and improved the commit messages.

v1->v2:
  - Improve the commit message.                 [Eric]
  - Fix the bug in [enable|disable]_counter.    [Raghavendra & Marc]
  - Add a check for whether KVM has the KVM_ARM_VCPU_PMU_V3_FILTER attribute.
  - Check whether the host PMU supports the test events through PMCEID0.
  - Split the test_invalid_filter() to another patch. [Eric]

Shaoqin Huang (3):
  KVM: selftests: aarch64: Add helper function for the vpmu vcpu
    creation
  KVM: selftests: aarch64: Introduce pmu_event_filter_test
  KVM: selftests: aarch64: Add invalid filter test in
    pmu_event_filter_test

 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/pmu_event_filter_test.c       | 352 ++++++++++++++++++
 .../kvm/aarch64/vpmu_counter_access.c         |  32 +-
 .../selftests/kvm/include/aarch64/vpmu.h      |  28 ++
 4 files changed, 387 insertions(+), 26 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
 create mode 100644 tools/testing/selftests/kvm/include/aarch64/vpmu.h

-- 
2.40.1




* [PATCH v10 1/3] KVM: selftests: aarch64: Add helper function for the vpmu vcpu creation
  2024-06-19  8:31 [PATCH v10 0/3] KVM: selftests: aarch64: Introduce pmu_event_filter_test Shaoqin Huang
@ 2024-06-19  8:31 ` Shaoqin Huang
  2024-06-20 22:30   ` Raghavendra Rao Ananta
  2024-06-19  8:31 ` [PATCH v10 2/3] KVM: selftests: aarch64: Introduce pmu_event_filter_test Shaoqin Huang
  2024-06-19  8:31 ` [PATCH v10 3/3] KVM: selftests: aarch64: Add invalid filter test in pmu_event_filter_test Shaoqin Huang
  2 siblings, 1 reply; 6+ messages in thread
From: Shaoqin Huang @ 2024-06-19  8:31 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier, kvmarm
  Cc: Shaoqin Huang, Eric Auger, James Morse, Suzuki K Poulose,
	Zenghui Yu, Paolo Bonzini, Shuah Khan, linux-kernel,
	linux-arm-kernel, kvm, linux-kselftest

Creating a vCPU with a vPMU is a common requirement for vPMU tests, so add
helper functions for the vPMU vCPU creation, and use those helper functions
in the vpmu_counter_access.c test.

Use this chance to delete the meaningless ASSERT on the pmuver, because KVM
does not advertise an IMP_DEF PMU to guests.

No functional changes intended.
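
As a usage sketch, a vPMU test built on the new helpers would do roughly the
following (the names are the ones added by this patch; the real tests also
set up the vGIC and the guest's descriptor tables before initializing the
vPMU):

	struct kvm_vm *vm = vm_create(1);
	struct kvm_vcpu *vcpu;

	/* The vCPU is created with the PMUv3 feature bit set. */
	vcpu = vm_vcpu_add_with_vpmu(vm, 0, guest_code);

	/* Wire up the PMU overflow interrupt, then finalize the vPMU. */
	vpmu_set_irq(vcpu, 23);
	vpmu_init(vcpu);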

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Shaoqin Huang <shahuang@redhat.com>
---
 .../kvm/aarch64/vpmu_counter_access.c         | 32 ++++---------------
 .../selftests/kvm/include/aarch64/vpmu.h      | 28 ++++++++++++++++
 2 files changed, 34 insertions(+), 26 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/aarch64/vpmu.h

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index d31b9f64ba14..68da44198719 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -16,6 +16,7 @@
 #include <processor.h>
 #include <test_util.h>
 #include <vgic.h>
+#include <vpmu.h>
 #include <perf/arm_pmuv3.h>
 #include <linux/bitfield.h>
 
@@ -407,18 +408,8 @@ static void guest_code(uint64_t expected_pmcr_n)
 /* Create a VM that has one vCPU with PMUv3 configured. */
 static void create_vpmu_vm(void *guest_code)
 {
-	struct kvm_vcpu_init init;
-	uint8_t pmuver, ec;
-	uint64_t dfr0, irq = 23;
-	struct kvm_device_attr irq_attr = {
-		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
-		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
-		.addr = (uint64_t)&irq,
-	};
-	struct kvm_device_attr init_attr = {
-		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
-		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
-	};
+	uint8_t ec;
+	uint64_t irq = 23;
 
 	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
 	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
@@ -430,26 +421,15 @@ static void create_vpmu_vm(void *guest_code)
 					guest_sync_handler);
 	}
 
-	/* Create vCPU with PMUv3 */
-	vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
-	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
+	vpmu_vm.vcpu = vm_vcpu_add_with_vpmu(vpmu_vm.vm, 0, guest_code);
 	vcpu_init_descriptor_tables(vpmu_vm.vcpu);
 	vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64);
 	__TEST_REQUIRE(vpmu_vm.gic_fd >= 0,
 		       "Failed to create vgic-v3, skipping");
 
-	/* Make sure that PMUv3 support is indicated in the ID register */
-	vcpu_get_reg(vpmu_vm.vcpu,
-		     KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
-	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0);
-	TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF &&
-		    pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP,
-		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
-
 	/* Initialize vPMU */
-	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
-	vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	vpmu_set_irq(vpmu_vm.vcpu, irq);
+	vpmu_init(vpmu_vm.vcpu);
 }
 
 static void destroy_vpmu_vm(void)
diff --git a/tools/testing/selftests/kvm/include/aarch64/vpmu.h b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
new file mode 100644
index 000000000000..5ef6cb011e41
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <kvm_util.h>
+
+static inline struct kvm_vcpu *vm_vcpu_add_with_vpmu(struct kvm_vm *vm,
+						     uint32_t vcpu_id,
+						     void *guest_code)
+{
+	struct kvm_vcpu_init init;
+
+	/* Create vCPU with PMUv3 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+
+	return aarch64_vcpu_add(vm, 0, &init, guest_code);
+}
+
+static void vpmu_set_irq(struct kvm_vcpu *vcpu, int irq)
+{
+	kvm_device_attr_set(vcpu->fd, KVM_ARM_VCPU_PMU_V3_CTRL,
+			    KVM_ARM_VCPU_PMU_V3_IRQ, &irq);
+}
+
+static void vpmu_init(struct kvm_vcpu *vcpu)
+{
+	kvm_device_attr_set(vcpu->fd, KVM_ARM_VCPU_PMU_V3_CTRL,
+			    KVM_ARM_VCPU_PMU_V3_INIT, NULL);
+}
-- 
2.40.1




* [PATCH v10 2/3] KVM: selftests: aarch64: Introduce pmu_event_filter_test
  2024-06-19  8:31 [PATCH v10 0/3] KVM: selftests: aarch64: Introduce pmu_event_filter_test Shaoqin Huang
  2024-06-19  8:31 ` [PATCH v10 1/3] KVM: selftests: aarch64: Add helper function for the vpmu vcpu creation Shaoqin Huang
@ 2024-06-19  8:31 ` Shaoqin Huang
  2024-06-20 22:26   ` Raghavendra Rao Ananta
  2024-06-19  8:31 ` [PATCH v10 3/3] KVM: selftests: aarch64: Add invalid filter test in pmu_event_filter_test Shaoqin Huang
  2 siblings, 1 reply; 6+ messages in thread
From: Shaoqin Huang @ 2024-06-19  8:31 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier, kvmarm
  Cc: Shaoqin Huang, Paolo Bonzini, Shuah Khan, James Morse,
	Suzuki K Poulose, Zenghui Yu, linux-kernel, kvm, linux-kselftest,
	linux-arm-kernel

Introduce pmu_event_filter_test for arm64 platforms. The test configures
PMUv3 for a vCPU, sets different PMU event filters for it, and checks that
the guest can see those events which userspace allows and cannot use those
events which userspace denies.

The test implements create_vpmu_vm() as a wrapper around
create_vpmu_vm_with_filter(), which allows the event filter to be set
before KVM_ARM_VCPU_PMU_V3_INIT.

The test uses the KVM_ARM_VCPU_PMU_V3_FILTER attribute to set the PMU event
filter in KVM. It filters common events such as branches retired and
instructions retired, and lets the guest check that it sees the expected
PMCEID registers.
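
As a concrete example of the expectation the test builds (this mirrors
set_pmce() below; only two events are shown): common events 0x0000-0x001F
map to PMCEID0_EL0[31:0] and 0x0020-0x003F to PMCEID1_EL0[31:0], so a
filter denying INST_RETIRED (0x08) and BR_RETIRED (0x21) should leave the
guest reading:

	expected_pmce.pmceid0 = max_pmce.pmceid0 & ~BIT(0x08);
	expected_pmce.pmceid1 = max_pmce.pmceid1 & ~BIT(0x21 - 0x20);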

Signed-off-by: Shaoqin Huang <shahuang@redhat.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/pmu_event_filter_test.c       | 314 ++++++++++++++++++
 2 files changed, 315 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ac280dcba996..2110b49e7a84 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -153,6 +153,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/aarch32_id_regs
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/hypercalls
 TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test
+TEST_GEN_PROGS_aarch64 += aarch64/pmu_event_filter_test
 TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/set_id_regs
 TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
diff --git a/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
new file mode 100644
index 000000000000..308b8677e08e
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
@@ -0,0 +1,314 @@
+
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * pmu_event_filter_test - Test userspace limiting of PMU events for a guest.
+ *
+ * Copyright (c) 2023 Red Hat, Inc.
+ *
+ * This test checks that the guest sees only the limited set of PMU events
+ * that userspace configures: the guest can use the events that userspace
+ * allows and cannot use the events that userspace denies.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 and KVM_ARM_VCPU_PMU_V3_FILTER
+ * are supported on the host.
+ */
+#include <kvm_util.h>
+#include <processor.h>
+#include <vgic.h>
+#include <vpmu.h>
+#include <test_util.h>
+#include <perf/arm_pmuv3.h>
+
+struct pmu_common_event_ids {
+	uint64_t pmceid0;
+	uint64_t pmceid1;
+} max_pmce, expected_pmce;
+
+struct vpmu_vm {
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+};
+
+static struct vpmu_vm vpmu_vm;
+
+#define FILTER_NR 10
+
+struct test_desc {
+	const char *name;
+	struct kvm_pmu_event_filter filter[FILTER_NR];
+};
+
+#define __DEFINE_FILTER(base, num, act)		\
+	((struct kvm_pmu_event_filter) {	\
+		.base_event	= base,		\
+		.nevents	= num,		\
+		.action		= act,		\
+	})
+
+#define DEFINE_FILTER(base, act) __DEFINE_FILTER(base, 1, act)
+
+#define EVENT_ALLOW(event)	DEFINE_FILTER(event, KVM_PMU_EVENT_ALLOW)
+#define EVENT_DENY(event)	DEFINE_FILTER(event, KVM_PMU_EVENT_DENY)
+
+static void guest_code(void)
+{
+	uint64_t pmceid0 = read_sysreg(pmceid0_el0);
+	uint64_t pmceid1 = read_sysreg(pmceid1_el0);
+
+	GUEST_ASSERT_EQ(expected_pmce.pmceid0, pmceid0);
+	GUEST_ASSERT_EQ(expected_pmce.pmceid1, pmceid1);
+
+	GUEST_DONE();
+}
+
+static void guest_get_pmceid(void)
+{
+	max_pmce.pmceid0 = read_sysreg(pmceid0_el0);
+	max_pmce.pmceid1 = read_sysreg(pmceid1_el0);
+
+	GUEST_DONE();
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+
+	while (1) {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_DONE:
+			return;
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+			break;
+		default:
+			TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		}
+	}
+}
+
+static void set_pmce(struct pmu_common_event_ids *pmce, int action, int event)
+{
+	int base = 0;
+	uint64_t *pmceid = NULL;
+
+	if (event >= 0x4000) {
+		event -= 0x4000;
+		base = 32;
+	}
+
+	if (event >= 0 && event <= 0x1F) {
+		pmceid = &pmce->pmceid0;
+	} else if (event >= 0x20 && event <= 0x3F) {
+		event -= 0x20;
+		pmceid = &pmce->pmceid1;
+	} else {
+		return;
+	}
+
+	event += base;
+	if (action == KVM_PMU_EVENT_ALLOW)
+		*pmceid |= BIT(event);
+	else
+		*pmceid &= ~BIT(event);
+}
+
+static inline bool is_valid_filter(struct kvm_pmu_event_filter *filter)
+{
+	return filter && filter->nevents != 0;
+}
+
+static void prepare_expected_pmce(struct kvm_pmu_event_filter *filter)
+{
+	struct pmu_common_event_ids pmce_mask = { ~0, ~0 };
+	int i;
+
+	if (is_valid_filter(filter) && filter->action == KVM_PMU_EVENT_ALLOW)
+		memset(&pmce_mask, 0, sizeof(pmce_mask));
+
+	while (is_valid_filter(filter)) {
+		for (i = 0; i < filter->nevents; i++)
+			set_pmce(&pmce_mask, filter->action,
+				 filter->base_event + i);
+		filter++;
+	}
+
+	expected_pmce.pmceid0 = max_pmce.pmceid0 & pmce_mask.pmceid0;
+	expected_pmce.pmceid1 = max_pmce.pmceid1 & pmce_mask.pmceid1;
+}
+
+static void pmu_event_filter_init(struct kvm_pmu_event_filter *filter)
+{
+	while (is_valid_filter(filter)) {
+		kvm_device_attr_set(vpmu_vm.vcpu->fd,
+				    KVM_ARM_VCPU_PMU_V3_CTRL,
+				    KVM_ARM_VCPU_PMU_V3_FILTER,
+				    filter);
+		filter++;
+	}
+}
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static void create_vpmu_vm_with_filter(void *guest_code,
+				       struct kvm_pmu_event_filter *filter)
+{
+	uint64_t irq = 23;
+
+	/* The test creates the vpmu_vm multiple times. Ensure a clean state */
+	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
+
+	vpmu_vm.vm = vm_create(1);
+	vpmu_vm.vcpu = vm_vcpu_add_with_vpmu(vpmu_vm.vm, 0, guest_code);
+	vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64);
+	__TEST_REQUIRE(vpmu_vm.gic_fd >= 0,
+		       "Failed to create vgic-v3, skipping");
+
+	pmu_event_filter_init(filter);
+
+	/* Initialize vPMU */
+	vpmu_set_irq(vpmu_vm.vcpu, irq);
+	vpmu_init(vpmu_vm.vcpu);
+}
+
+static void create_vpmu_vm(void *guest_code)
+{
+	create_vpmu_vm_with_filter(guest_code, NULL);
+}
+
+static void destroy_vpmu_vm(void)
+{
+	close(vpmu_vm.gic_fd);
+	kvm_vm_free(vpmu_vm.vm);
+}
+
+static void run_test(struct test_desc *t)
+{
+	pr_info("Test: %s\n", t->name);
+
+	create_vpmu_vm_with_filter(guest_code, t->filter);
+	prepare_expected_pmce(t->filter);
+	sync_global_to_guest(vpmu_vm.vm, expected_pmce);
+
+	run_vcpu(vpmu_vm.vcpu);
+
+	destroy_vpmu_vm();
+}
+
+static struct test_desc tests[] = {
+	{
+		.name = "without_filter",
+		.filter = {
+			{ 0 }
+		},
+	},
+	{
+		.name = "member_allow_filter",
+		.filter = {
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_SW_INCR),
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_BR_RETIRED),
+			{ 0 },
+		},
+	},
+	{
+		.name = "member_deny_filter",
+		.filter = {
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_SW_INCR),
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_BR_RETIRED),
+			{ 0 },
+		},
+	},
+	{
+		.name = "not_member_deny_filter",
+		.filter = {
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_SW_INCR),
+			{ 0 },
+		},
+	},
+	{
+		.name = "not_member_allow_filter",
+		.filter = {
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_SW_INCR),
+			{ 0 },
+		},
+	},
+	{
+		.name = "deny_chain_filter",
+		.filter = {
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_CHAIN),
+			{ 0 },
+		},
+	},
+	{
+		.name = "deny_cpu_cycles_filter",
+		.filter = {
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+			{ 0 },
+		},
+	},
+	{
+		.name = "cancel_allow_filter",
+		.filter = {
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		},
+	},
+	{
+		.name = "cancel_deny_filter",
+		.filter = {
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		},
+	},
+	{
+		.name = "multiple_filter",
+		.filter = {
+			__DEFINE_FILTER(0x0, 0x10, KVM_PMU_EVENT_ALLOW),
+			__DEFINE_FILTER(0x6, 0x3, KVM_PMU_EVENT_DENY),
+		},
+	},
+	{ 0 }
+};
+
+static void run_tests(void)
+{
+	struct test_desc *t;
+
+	for (t = &tests[0]; t->name; t++)
+		run_test(t);
+}
+
+static int used_pmu_events[] = {
+       ARMV8_PMUV3_PERFCTR_BR_RETIRED,
+       ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+       ARMV8_PMUV3_PERFCTR_CHAIN,
+       ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+};
+
+static bool kvm_pmu_support_events(void)
+{
+	struct pmu_common_event_ids used_pmce = { 0, 0 };
+
+	create_vpmu_vm(guest_get_pmceid);
+
+	memset(&max_pmce, 0, sizeof(max_pmce));
+	sync_global_to_guest(vpmu_vm.vm, max_pmce);
+	run_vcpu(vpmu_vm.vcpu);
+	sync_global_from_guest(vpmu_vm.vm, max_pmce);
+	destroy_vpmu_vm();
+
+	for (int i = 0; i < ARRAY_SIZE(used_pmu_events); i++)
+		set_pmce(&used_pmce, KVM_PMU_EVENT_ALLOW, used_pmu_events[i]);
+
+	return ((max_pmce.pmceid0 & used_pmce.pmceid0) == used_pmce.pmceid0) &&
+	       ((max_pmce.pmceid1 & used_pmce.pmceid1) == used_pmce.pmceid1);
+}
+
+int main(void)
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+	TEST_REQUIRE(kvm_pmu_support_events());
+
+	run_tests();
+}
-- 
2.40.1




* [PATCH v10 3/3] KVM: selftests: aarch64: Add invalid filter test in pmu_event_filter_test
  2024-06-19  8:31 [PATCH v10 0/3] KVM: selftests: aarch64: Introduce pmu_event_filter_test Shaoqin Huang
  2024-06-19  8:31 ` [PATCH v10 1/3] KVM: selftests: aarch64: Add helper function for the vpmu vcpu creation Shaoqin Huang
  2024-06-19  8:31 ` [PATCH v10 2/3] KVM: selftests: aarch64: Introduce pmu_event_filter_test Shaoqin Huang
@ 2024-06-19  8:31 ` Shaoqin Huang
  2 siblings, 0 replies; 6+ messages in thread
From: Shaoqin Huang @ 2024-06-19  8:31 UTC (permalink / raw)
  To: Oliver Upton, Marc Zyngier, kvmarm
  Cc: Shaoqin Huang, Raghavendra Rao Ananta, Eric Auger, James Morse,
	Suzuki K Poulose, Zenghui Yu, Paolo Bonzini, Shuah Khan,
	linux-arm-kernel, kvm, linux-kselftest, linux-kernel

Add an invalid filter test which sets a filter range beyond the event space
and sets an invalid action, to verify that KVM_ARM_VCPU_PMU_V3_FILTER
returns the expected error.
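
For instance, with the 16-bit event space (event numbers 0x0 through 0xFFFF),
the range used in the test spills past the last valid event and should be
rejected with EINVAL (a sketch of the same filter the test constructs):

	/* Last event = 0x8000 + 0x8001 - 1 = 0x10000, which is > 0xFFFF. */
	struct kvm_pmu_event_filter invalid = {
		.base_event	= BIT(15),
		.nevents	= BIT(15) + 1,
		.action		= KVM_PMU_EVENT_ALLOW,
	};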

Reviewed-by: Raghavendra Rao Ananta <rananta@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Shaoqin Huang <shahuang@redhat.com>
---
 .../kvm/aarch64/pmu_event_filter_test.c       | 38 +++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
index 308b8677e08e..1abbe6d8deb2 100644
--- a/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
@@ -8,6 +8,7 @@
  * This test checks that the guest sees only the limited set of PMU events
  * that userspace configures: the guest can use the events that userspace
  * allows and cannot use the events that userspace denies.
+ * It also checks that setting an invalid filter returns the expected error.
  * This test runs only when KVM_CAP_ARM_PMU_V3 and KVM_ARM_VCPU_PMU_V3_FILTER
  * are supported on the host.
  */
  */
@@ -31,6 +32,7 @@ struct vpmu_vm {
 
 static struct vpmu_vm vpmu_vm;
 
+#define KVM_PMU_EVENT_INVALID 3
 #define FILTER_NR 10
 
 struct test_desc {
@@ -181,6 +183,40 @@ static void destroy_vpmu_vm(void)
 	kvm_vm_free(vpmu_vm.vm);
 }
 
+static void test_invalid_filter(void)
+{
+	struct kvm_pmu_event_filter invalid;
+	int ret;
+
+	pr_info("Test: test_invalid_filter\n");
+
+	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
+
+	vpmu_vm.vm = vm_create(1);
+	vpmu_vm.vcpu = vm_vcpu_add_with_vpmu(vpmu_vm.vm, 0, guest_code);
+	vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64);
+	__TEST_REQUIRE(vpmu_vm.gic_fd >= 0,
+		       "Failed to create vgic-v3, skipping");
+
+	/* The event space has (1 << 16) events; set a range that exceeds it. */
+	invalid = __DEFINE_FILTER(BIT(15), BIT(15) + 1, KVM_PMU_EVENT_ALLOW);
+	ret = __kvm_device_attr_set(vpmu_vm.vcpu->fd, KVM_ARM_VCPU_PMU_V3_CTRL,
+				    KVM_ARM_VCPU_PMU_V3_FILTER, &invalid);
+	TEST_ASSERT(ret && errno == EINVAL, "Set Invalid filter range "
+		    "ret = %d, errno = %d (expected ret = -1, errno = EINVAL)",
+		    ret, errno);
+
+	/* Set an invalid action. */
+	invalid = __DEFINE_FILTER(0, 1, KVM_PMU_EVENT_INVALID);
+	ret = __kvm_device_attr_set(vpmu_vm.vcpu->fd, KVM_ARM_VCPU_PMU_V3_CTRL,
+				    KVM_ARM_VCPU_PMU_V3_FILTER, &invalid);
+	TEST_ASSERT(ret && errno == EINVAL, "Set Invalid filter action "
+		    "ret = %d, errno = %d (expected ret = -1, errno = EINVAL)",
+		    ret, errno);
+
+	destroy_vpmu_vm();
+}
+
 static void run_test(struct test_desc *t)
 {
 	pr_info("Test: %s\n", t->name);
@@ -311,4 +347,6 @@ int main(void)
 	TEST_REQUIRE(kvm_pmu_support_events());
 
 	run_tests();
+
+	test_invalid_filter();
 }
-- 
2.40.1




* Re: [PATCH v10 2/3] KVM: selftests: aarch64: Introduce pmu_event_filter_test
  2024-06-19  8:31 ` [PATCH v10 2/3] KVM: selftests: aarch64: Introduce pmu_event_filter_test Shaoqin Huang
@ 2024-06-20 22:26   ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 6+ messages in thread
From: Raghavendra Rao Ananta @ 2024-06-20 22:26 UTC (permalink / raw)
  To: Shaoqin Huang
  Cc: Oliver Upton, Marc Zyngier, kvmarm, Paolo Bonzini, Shuah Khan,
	James Morse, Suzuki K Poulose, Zenghui Yu, linux-kernel, kvm,
	linux-kselftest, linux-arm-kernel

Hi Shaoqin,

On Wed, Jun 19, 2024 at 1:33 AM Shaoqin Huang <shahuang@redhat.com> wrote:
>
> Introduce pmu_event_filter_test for arm64 platforms. The test configures
> PMUv3 for a vCPU, sets different PMU event filters for it, and checks that
> the guest can see those events which userspace allows and cannot use those
> events which userspace denies.
>
> The test implements create_vpmu_vm() as a wrapper around
> create_vpmu_vm_with_filter(), which allows the event filter to be set
> before KVM_ARM_VCPU_PMU_V3_INIT.
>
> The test uses the KVM_ARM_VCPU_PMU_V3_FILTER attribute to set the PMU event
> filter in KVM. It filters common events such as branches retired and
> instructions retired, and lets the guest check that it sees the expected
> PMCEID registers.
>
> Signed-off-by: Shaoqin Huang <shahuang@redhat.com>
> ---
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  .../kvm/aarch64/pmu_event_filter_test.c       | 314 ++++++++++++++++++
>  2 files changed, 315 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index ac280dcba996..2110b49e7a84 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -153,6 +153,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/aarch32_id_regs
>  TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
>  TEST_GEN_PROGS_aarch64 += aarch64/hypercalls
>  TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test
> +TEST_GEN_PROGS_aarch64 += aarch64/pmu_event_filter_test
>  TEST_GEN_PROGS_aarch64 += aarch64/psci_test
>  TEST_GEN_PROGS_aarch64 += aarch64/set_id_regs
>  TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
> diff --git a/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
> new file mode 100644
> index 000000000000..308b8677e08e
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
> @@ -0,0 +1,314 @@
> +
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * pmu_event_filter_test - Test userspace limiting of PMU events for a guest.
> + *
> + * Copyright (c) 2023 Red Hat, Inc.
> + *
> + * This test checks that the guest sees only the limited set of PMU events
> + * that userspace configures: the guest can use the events that userspace
> + * allows and cannot use the events that userspace denies.
> + * This test runs only when KVM_CAP_ARM_PMU_V3 and KVM_ARM_VCPU_PMU_V3_FILTER
> + * are supported on the host.
> + */
> +#include <kvm_util.h>
> +#include <processor.h>
> +#include <vgic.h>
> +#include <vpmu.h>
> +#include <test_util.h>
> +#include <perf/arm_pmuv3.h>
> +
> +struct pmu_common_event_ids {
> +       uint64_t pmceid0;
> +       uint64_t pmceid1;
> +} max_pmce, expected_pmce;
> +
> +struct vpmu_vm {
> +       struct kvm_vm *vm;
> +       struct kvm_vcpu *vcpu;
> +       int gic_fd;
> +};
> +
> +static struct vpmu_vm vpmu_vm;
> +
> +#define FILTER_NR 10
> +
> +struct test_desc {
> +       const char *name;
> +       struct kvm_pmu_event_filter filter[FILTER_NR];
> +};
> +
> +#define __DEFINE_FILTER(base, num, act)                \
> +       ((struct kvm_pmu_event_filter) {        \
> +               .base_event     = base,         \
> +               .nevents        = num,          \
> +               .action         = act,          \
> +       })
> +
> +#define DEFINE_FILTER(base, act) __DEFINE_FILTER(base, 1, act)
> +
> +#define EVENT_ALLOW(event)     DEFINE_FILTER(event, KVM_PMU_EVENT_ALLOW)
> +#define EVENT_DENY(event)      DEFINE_FILTER(event, KVM_PMU_EVENT_DENY)
> +
> +static void guest_code(void)
> +{
> +       uint64_t pmceid0 = read_sysreg(pmceid0_el0);
> +       uint64_t pmceid1 = read_sysreg(pmceid1_el0);
> +
> +       GUEST_ASSERT_EQ(expected_pmce.pmceid0, pmceid0);
> +       GUEST_ASSERT_EQ(expected_pmce.pmceid1, pmceid1);
> +
> +       GUEST_DONE();
> +}
> +
> +static void guest_get_pmceid(void)
> +{
> +       max_pmce.pmceid0 = read_sysreg(pmceid0_el0);
> +       max_pmce.pmceid1 = read_sysreg(pmceid1_el0);
> +
> +       GUEST_DONE();
> +}
> +
> +static void run_vcpu(struct kvm_vcpu *vcpu)
> +{
> +       struct ucall uc;
> +
> +       while (1) {
> +               vcpu_run(vcpu);
> +               switch (get_ucall(vcpu, &uc)) {
> +               case UCALL_DONE:
> +                       return;
> +               case UCALL_ABORT:
> +                       REPORT_GUEST_ASSERT(uc);
> +                       break;
> +               default:
> +                       TEST_FAIL("Unknown ucall %lu", uc.cmd);
> +               }
> +       }
> +}
> +
> +static void set_pmce(struct pmu_common_event_ids *pmce, int action, int event)
> +{
> +       int base = 0;
> +       uint64_t *pmceid = NULL;
> +
> +       if (event >= 0x4000) {
> +               event -= 0x4000;
> +               base = 32;
> +       }
> +
> +       if (event >= 0 && event <= 0x1F) {
> +               pmceid = &pmce->pmceid0;
> +       } else if (event >= 0x20 && event <= 0x3F) {
> +               event -= 0x20;
> +               pmceid = &pmce->pmceid1;
> +       } else {
> +               return;
> +       }
> +
> +       event += base;
> +       if (action == KVM_PMU_EVENT_ALLOW)
> +               *pmceid |= BIT(event);
> +       else
> +               *pmceid &= ~BIT(event);
> +}
> +
> +static inline bool is_valid_filter(struct kvm_pmu_event_filter *filter)
> +{
> +       return filter && filter->nevents != 0;
> +}
> +
> +static void prepare_expected_pmce(struct kvm_pmu_event_filter *filter)
> +{
> +       struct pmu_common_event_ids pmce_mask = { ~0, ~0 };
> +       int i;
> +
> +       if (is_valid_filter(filter) && filter->action == KVM_PMU_EVENT_ALLOW)
> +               memset(&pmce_mask, 0, sizeof(pmce_mask));
> +
> +       while (is_valid_filter(filter)) {
> +               for (i = 0; i < filter->nevents; i++)
> +                       set_pmce(&pmce_mask, filter->action,
> +                                filter->base_event + i);
> +               filter++;
> +       }
> +
> +       expected_pmce.pmceid0 = max_pmce.pmceid0 & pmce_mask.pmceid0;
> +       expected_pmce.pmceid1 = max_pmce.pmceid1 & pmce_mask.pmceid1;
> +}
> +
> +static void pmu_event_filter_init(struct kvm_pmu_event_filter *filter)
> +{
> +       while (is_valid_filter(filter)) {
> +               kvm_device_attr_set(vpmu_vm.vcpu->fd,
> +                                   KVM_ARM_VCPU_PMU_V3_CTRL,
> +                                   KVM_ARM_VCPU_PMU_V3_FILTER,
> +                                   filter);
> +               filter++;
> +       }
> +}
> +
> +/* Create a VM that has one vCPU with PMUv3 configured. */
> +static void create_vpmu_vm_with_filter(void *guest_code,
> +                                      struct kvm_pmu_event_filter *filter)
> +{
> +       uint64_t irq = 23;
> +
> +       /* The test creates the vpmu_vm multiple times. Ensure a clean state */
> +       memset(&vpmu_vm, 0, sizeof(vpmu_vm));
> +
> +       vpmu_vm.vm = vm_create(1);
> +       vpmu_vm.vcpu = vm_vcpu_add_with_vpmu(vpmu_vm.vm, 0, guest_code);
> +       vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64);
> +       __TEST_REQUIRE(vpmu_vm.gic_fd >= 0,
> +                      "Failed to create vgic-v3, skipping");
> +
> +       pmu_event_filter_init(filter);
> +
> +       /* Initialize vPMU */
> +       vpmu_set_irq(vpmu_vm.vcpu, irq);
> +       vpmu_init(vpmu_vm.vcpu);
> +}
> +
> +static void create_vpmu_vm(void *guest_code)
> +{
> +       create_vpmu_vm_with_filter(guest_code, NULL);
> +}
> +
> +static void destroy_vpmu_vm(void)
> +{
> +       close(vpmu_vm.gic_fd);
> +       kvm_vm_free(vpmu_vm.vm);
> +}
> +
> +static void run_test(struct test_desc *t)
> +{
> +       pr_info("Test: %s\n", t->name);
> +
> +       create_vpmu_vm_with_filter(guest_code, t->filter);
> +       prepare_expected_pmce(t->filter);
> +       sync_global_to_guest(vpmu_vm.vm, expected_pmce);
> +
> +       run_vcpu(vpmu_vm.vcpu);
> +
> +       destroy_vpmu_vm();
> +}
> +
> +static struct test_desc tests[] = {
> +       {
> +               .name = "without_filter",
> +               .filter = {
> +                       { 0 }
> +               },
> +       },
> +       {
> +               .name = "member_allow_filter",
> +               .filter = {
> +                       EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_SW_INCR),
> +                       EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +                       EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_BR_RETIRED),
> +                       { 0 },
> +               },
> +       },
> +       {
> +               .name = "member_deny_filter",
> +               .filter = {
> +                       EVENT_DENY(ARMV8_PMUV3_PERFCTR_SW_INCR),
> +                       EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +                       EVENT_DENY(ARMV8_PMUV3_PERFCTR_BR_RETIRED),
> +                       { 0 },
> +               },
> +       },
> +       {
> +               .name = "not_member_deny_filter",
> +               .filter = {
> +                       EVENT_DENY(ARMV8_PMUV3_PERFCTR_SW_INCR),
> +                       { 0 },
> +               },
> +       },
> +       {
> +               .name = "not_member_allow_filter",
> +               .filter = {
> +                       EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_SW_INCR),
> +                       { 0 },
> +               },
> +       },
> +       {
> +               .name = "deny_chain_filter",
> +               .filter = {
> +                       EVENT_DENY(ARMV8_PMUV3_PERFCTR_CHAIN),
> +                       { 0 },
> +               },
> +       },
> +       {
> +               .name = "deny_cpu_cycles_filter",
> +               .filter = {
> +                       EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +                       { 0 },
> +               },
> +       },
> +       {
> +               .name = "cancel_allow_filter",
> +               .filter = {
> +                       EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +                       EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +               },
> +       },
> +       {
> +               .name = "cancel_deny_filter",
> +               .filter = {
> +                       EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +                       EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +               },
> +       },
> +       {
> +               .name = "multiple_filter",
> +               .filter = {
> +                       __DEFINE_FILTER(0x0, 0x10, KVM_PMU_EVENT_ALLOW),
> +                       __DEFINE_FILTER(0x6, 0x3, KVM_PMU_EVENT_DENY),
> +               },
> +       },
> +       { 0 }
> +};
> +
> +static void run_tests(void)
> +{
> +       struct test_desc *t;
> +
> +       for (t = &tests[0]; t->name; t++)
> +               run_test(t);
> +}
> +
> +static int used_pmu_events[] = {
> +       ARMV8_PMUV3_PERFCTR_BR_RETIRED,
> +       ARMV8_PMUV3_PERFCTR_INST_RETIRED,
> +       ARMV8_PMUV3_PERFCTR_CHAIN,
> +       ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
> +};
> +
> +static bool kvm_pmu_support_events(void)
> +{
> +       struct pmu_common_event_ids used_pmce = { 0, 0 };
> +
> +       create_vpmu_vm(guest_get_pmceid);
> +
> +       memset(&max_pmce, 0, sizeof(max_pmce));
> +       sync_global_to_guest(vpmu_vm.vm, max_pmce);
> +       run_vcpu(vpmu_vm.vcpu);
> +       sync_global_from_guest(vpmu_vm.vm, max_pmce);
> +       destroy_vpmu_vm();
> +
> +       for (int i = 0; i < ARRAY_SIZE(used_pmu_events); i++)
> +               set_pmce(&used_pmce, KVM_PMU_EVENT_ALLOW, used_pmu_events[i]);
> +
> +       return ((max_pmce.pmceid0 & used_pmce.pmceid0) == used_pmce.pmceid0) &&
> +              ((max_pmce.pmceid1 & used_pmce.pmceid1) == used_pmce.pmceid1);
> +}
> +
> +int main(void)
> +{
> +       TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
> +       TEST_REQUIRE(kvm_pmu_support_events());
> +
> +       run_tests();
> +}
> --
> 2.40.1
>
>
Reviewed-by: Raghavendra Rao Ananta <rananta@google.com>

- Raghavendra



* Re: [PATCH v10 1/3] KVM: selftests: aarch64: Add helper function for the vpmu vcpu creation
  2024-06-19  8:31 ` [PATCH v10 1/3] KVM: selftests: aarch64: Add helper function for the vpmu vcpu creation Shaoqin Huang
@ 2024-06-20 22:30   ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 6+ messages in thread
From: Raghavendra Rao Ananta @ 2024-06-20 22:30 UTC (permalink / raw)
  To: Shaoqin Huang
  Cc: Oliver Upton, Marc Zyngier, kvmarm, Eric Auger, James Morse,
	Suzuki K Poulose, Zenghui Yu, Paolo Bonzini, Shuah Khan,
	linux-kernel, linux-arm-kernel, kvm, linux-kselftest

Hi Shaoqin,

On Wed, Jun 19, 2024 at 1:32 AM Shaoqin Huang <shahuang@redhat.com> wrote:
>
> Creating a vCPU with a vPMU is a common requirement for vPMU tests, so add
> helper functions for the vPMU vCPU creation, and use those helper functions
> in the vpmu_counter_access.c test.
>
> Use this chance to delete the meaningless ASSERT on the pmuver, because KVM
> does not advertise an IMP_DEF PMU to guests.
>
> No functional changes intended.
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Shaoqin Huang <shahuang@redhat.com>
> ---
>  .../kvm/aarch64/vpmu_counter_access.c         | 32 ++++---------------
>  .../selftests/kvm/include/aarch64/vpmu.h      | 28 ++++++++++++++++
>  2 files changed, 34 insertions(+), 26 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/include/aarch64/vpmu.h
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> index d31b9f64ba14..68da44198719 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
> @@ -16,6 +16,7 @@
>  #include <processor.h>
>  #include <test_util.h>
>  #include <vgic.h>
> +#include <vpmu.h>
>  #include <perf/arm_pmuv3.h>
>  #include <linux/bitfield.h>
>
> @@ -407,18 +408,8 @@ static void guest_code(uint64_t expected_pmcr_n)
>  /* Create a VM that has one vCPU with PMUv3 configured. */
>  static void create_vpmu_vm(void *guest_code)
>  {
> -       struct kvm_vcpu_init init;
> -       uint8_t pmuver, ec;
> -       uint64_t dfr0, irq = 23;
> -       struct kvm_device_attr irq_attr = {
> -               .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> -               .attr = KVM_ARM_VCPU_PMU_V3_IRQ,
> -               .addr = (uint64_t)&irq,
> -       };
> -       struct kvm_device_attr init_attr = {
> -               .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> -               .attr = KVM_ARM_VCPU_PMU_V3_INIT,
> -       };
> +       uint8_t ec;
> +       uint64_t irq = 23;
>
>         /* The test creates the vpmu_vm multiple times. Ensure a clean state */
>         memset(&vpmu_vm, 0, sizeof(vpmu_vm));
> @@ -430,26 +421,15 @@ static void create_vpmu_vm(void *guest_code)
>                                         guest_sync_handler);
>         }
>
> -       /* Create vCPU with PMUv3 */
> -       vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
> -       init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> -       vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
> +       vpmu_vm.vcpu = vm_vcpu_add_with_vpmu(vpmu_vm.vm, 0, guest_code);
>         vcpu_init_descriptor_tables(vpmu_vm.vcpu);
>         vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64);
>         __TEST_REQUIRE(vpmu_vm.gic_fd >= 0,
>                        "Failed to create vgic-v3, skipping");
>
> -       /* Make sure that PMUv3 support is indicated in the ID register */
> -       vcpu_get_reg(vpmu_vm.vcpu,
> -                    KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
> -       pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0);
> -       TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF &&
> -                   pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP,
> -                   "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
> -
>         /* Initialize vPMU */
> -       vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
> -       vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
> +       vpmu_set_irq(vpmu_vm.vcpu, irq);
> +       vpmu_init(vpmu_vm.vcpu);
>  }
>
>  static void destroy_vpmu_vm(void)
> diff --git a/tools/testing/selftests/kvm/include/aarch64/vpmu.h b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
> new file mode 100644
> index 000000000000..5ef6cb011e41
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
> @@ -0,0 +1,28 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <kvm_util.h>
> +
> +static inline struct kvm_vcpu *vm_vcpu_add_with_vpmu(struct kvm_vm *vm,
> +                                                    uint32_t vcpu_id,
> +                                                    void *guest_code)
> +{
> +       struct kvm_vcpu_init init;
> +
> +       /* Create vCPU with PMUv3 */
> +       vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
> +       init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> +
> +       return aarch64_vcpu_add(vm, 0, &init, guest_code);
> +}
> +
> +static void vpmu_set_irq(struct kvm_vcpu *vcpu, int irq)
> +{
> +       kvm_device_attr_set(vcpu->fd, KVM_ARM_VCPU_PMU_V3_CTRL,
> +                           KVM_ARM_VCPU_PMU_V3_IRQ, &irq);
> +}
> +
> +static void vpmu_init(struct kvm_vcpu *vcpu)
> +{
> +       kvm_device_attr_set(vcpu->fd, KVM_ARM_VCPU_PMU_V3_CTRL,
> +                           KVM_ARM_VCPU_PMU_V3_INIT, NULL);
> +}
> --
> 2.40.1
>
>
Reviewed-by: Raghavendra Rao Ananta <rananta@google.com>

- Raghavendra


