* [PATCH v3 0/6] KVM: selftests: add powerpc support
@ 2023-06-08 3:24 Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 1/6] KVM: selftests: Move pgd_created check into virt_pgd_alloc Nicholas Piggin
` (5 more replies)
0 siblings, 6 replies; 9+ messages in thread
From: Nicholas Piggin @ 2023-06-08 3:24 UTC (permalink / raw)
To: kvm, Paolo Bonzini; +Cc: linuxppc-dev, Nicholas Piggin
This series adds initial KVM selftests support for powerpc
(64-bit, BookS, radix MMU).
Since v2:
- Add a couple of new tests (patches 5 and 6).
- Make the default page size match the host page size.
- Check for radix MMU capability.
- Build a few more of the generic tests.
Since v1:
- Update MAINTAINERS KVM PPC entry to include kvm selftests.
- Fixes and cleanups from Sean's review including new patch 1.
- Add 4K guest page support requiring new patch 2.
Thanks,
Nick
Nicholas Piggin (6):
KVM: selftests: Move pgd_created check into virt_pgd_alloc
KVM: selftests: Add aligned guest physical page allocator
KVM: PPC: selftests: add support for powerpc
KVM: PPC: selftests: add selftests sanity tests
KVM: PPC: selftests: Add a TLBIEL virtualisation tester
KVM: PPC: selftests: Add interrupt performance tester
MAINTAINERS | 2 +
tools/testing/selftests/kvm/Makefile | 23 +
.../selftests/kvm/include/kvm_util_base.h | 29 +
.../selftests/kvm/include/powerpc/hcall.h | 21 +
.../selftests/kvm/include/powerpc/ppc_asm.h | 32 ++
.../selftests/kvm/include/powerpc/processor.h | 46 ++
.../selftests/kvm/lib/aarch64/processor.c | 4 -
tools/testing/selftests/kvm/lib/guest_modes.c | 27 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 56 +-
.../selftests/kvm/lib/powerpc/handlers.S | 93 +++
.../testing/selftests/kvm/lib/powerpc/hcall.c | 45 ++
.../selftests/kvm/lib/powerpc/processor.c | 541 ++++++++++++++++++
.../testing/selftests/kvm/lib/powerpc/ucall.c | 30 +
.../selftests/kvm/lib/riscv/processor.c | 4 -
.../selftests/kvm/lib/s390x/processor.c | 4 -
.../selftests/kvm/lib/x86_64/processor.c | 7 +-
tools/testing/selftests/kvm/powerpc/helpers.h | 46 ++
.../selftests/kvm/powerpc/interrupt_perf.c | 199 +++++++
.../testing/selftests/kvm/powerpc/null_test.c | 166 ++++++
.../selftests/kvm/powerpc/rtas_hcall.c | 136 +++++
.../selftests/kvm/powerpc/tlbiel_test.c | 508 ++++++++++++++++
21 files changed, 1981 insertions(+), 38 deletions(-)
create mode 100644 tools/testing/selftests/kvm/include/powerpc/hcall.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/ppc_asm.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/processor.h
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/handlers.S
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/hcall.c
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/processor.c
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/ucall.c
create mode 100644 tools/testing/selftests/kvm/powerpc/helpers.h
create mode 100644 tools/testing/selftests/kvm/powerpc/interrupt_perf.c
create mode 100644 tools/testing/selftests/kvm/powerpc/null_test.c
create mode 100644 tools/testing/selftests/kvm/powerpc/rtas_hcall.c
create mode 100644 tools/testing/selftests/kvm/powerpc/tlbiel_test.c
--
2.40.1
* [PATCH v3 1/6] KVM: selftests: Move pgd_created check into virt_pgd_alloc
2023-06-08 3:24 [PATCH v3 0/6] KVM: selftests: add powerpc support Nicholas Piggin
@ 2023-06-08 3:24 ` Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 2/6] KVM: selftests: Add aligned guest physical page allocator Nicholas Piggin
` (4 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Nicholas Piggin @ 2023-06-08 3:24 UTC (permalink / raw)
To: kvm, Paolo Bonzini; +Cc: linuxppc-dev, Nicholas Piggin
All implementations of virt_arch_pgd_alloc perform the same test and set
of pgd_created. Move this into common code.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
tools/testing/selftests/kvm/include/kvm_util_base.h | 5 +++++
tools/testing/selftests/kvm/lib/aarch64/processor.c | 4 ----
tools/testing/selftests/kvm/lib/riscv/processor.c | 4 ----
tools/testing/selftests/kvm/lib/s390x/processor.c | 4 ----
tools/testing/selftests/kvm/lib/x86_64/processor.c | 7 ++-----
5 files changed, 7 insertions(+), 17 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index a089c356f354..d630a0a1877c 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -822,7 +822,12 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm);
static inline void virt_pgd_alloc(struct kvm_vm *vm)
{
+ if (vm->pgd_created)
+ return;
+
virt_arch_pgd_alloc(vm);
+
+ vm->pgd_created = true;
}
/*
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 3a0259e25335..3da3ec7c5b23 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -96,13 +96,9 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
{
size_t nr_pages = page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size;
- if (vm->pgd_created)
- return;
-
vm->pgd = vm_phy_pages_alloc(vm, nr_pages,
KVM_GUEST_PAGE_TABLE_MIN_PADDR,
vm->memslots[MEM_REGION_PT]);
- vm->pgd_created = true;
}
static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index d146ca71e0c0..7695ba2cd369 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -57,13 +57,9 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
{
size_t nr_pages = page_align(vm, ptrs_per_pte(vm) * 8) / vm->page_size;
- if (vm->pgd_created)
- return;
-
vm->pgd = vm_phy_pages_alloc(vm, nr_pages,
KVM_GUEST_PAGE_TABLE_MIN_PADDR,
vm->memslots[MEM_REGION_PT]);
- vm->pgd_created = true;
}
void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c
index 15945121daf1..358e03f09c7a 100644
--- a/tools/testing/selftests/kvm/lib/s390x/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390x/processor.c
@@ -17,16 +17,12 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x",
vm->page_size);
- if (vm->pgd_created)
- return;
-
paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION,
KVM_GUEST_PAGE_TABLE_MIN_PADDR,
vm->memslots[MEM_REGION_PT]);
memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size);
vm->pgd = paddr;
- vm->pgd_created = true;
}
/*
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index d4a0b504b1e0..d4deb2718e86 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -127,11 +127,8 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
"unknown or unsupported guest mode, mode: 0x%x", vm->mode);
- /* If needed, create page map l4 table. */
- if (!vm->pgd_created) {
- vm->pgd = vm_alloc_page_table(vm);
- vm->pgd_created = true;
- }
+ /* Create page map l4 table. */
+ vm->pgd = vm_alloc_page_table(vm);
}
static void *virt_get_pte(struct kvm_vm *vm, uint64_t *parent_pte,
--
2.40.1
* [PATCH v3 2/6] KVM: selftests: Add aligned guest physical page allocator
2023-06-08 3:24 [PATCH v3 0/6] KVM: selftests: add powerpc support Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 1/6] KVM: selftests: Move pgd_created check into virt_pgd_alloc Nicholas Piggin
@ 2023-06-08 3:24 ` Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 3/6] KVM: PPC: selftests: add support for powerpc Nicholas Piggin
` (3 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Nicholas Piggin @ 2023-06-08 3:24 UTC (permalink / raw)
To: kvm, Paolo Bonzini; +Cc: linuxppc-dev, Nicholas Piggin
powerpc will require this in order to allocate MMU tables in guest memory
that are larger than the guest base page size.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
.../selftests/kvm/include/kvm_util_base.h | 2 +
tools/testing/selftests/kvm/lib/kvm_util.c | 44 ++++++++++++-------
2 files changed, 29 insertions(+), 17 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index d630a0a1877c..42d03ae08ecb 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -680,6 +680,8 @@ const char *exit_reason_str(unsigned int exit_reason);
vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
uint32_t memslot);
+vm_paddr_t vm_phy_pages_alloc_align(struct kvm_vm *vm, size_t num, size_t align,
+ vm_paddr_t paddr_min, uint32_t memslot);
vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
vm_paddr_t paddr_min, uint32_t memslot);
vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 298c4372fb1a..68558d60f949 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1903,6 +1903,7 @@ const char *exit_reason_str(unsigned int exit_reason)
* Input Args:
* vm - Virtual Machine
* num - number of pages
+ * align - pages alignment
* paddr_min - Physical address minimum
* memslot - Memory region to allocate page from
*
@@ -1916,7 +1917,7 @@ const char *exit_reason_str(unsigned int exit_reason)
* and their base address is returned. A TEST_ASSERT failure occurs if
* not enough pages are available at or above paddr_min.
*/
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+vm_paddr_t vm_phy_pages_alloc_align(struct kvm_vm *vm, size_t num, size_t align,
vm_paddr_t paddr_min, uint32_t memslot)
{
struct userspace_mem_region *region;
@@ -1930,24 +1931,27 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
paddr_min, vm->page_size);
region = memslot2region(vm, memslot);
- base = pg = paddr_min >> vm->page_shift;
-
- do {
- for (; pg < base + num; ++pg) {
- if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
- base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
- break;
+ base = paddr_min >> vm->page_shift;
+
+again:
+ base = (base + align - 1) & ~(align - 1);
+ for (pg = base; pg < base + num; ++pg) {
+ if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
+ base = sparsebit_next_set(region->unused_phy_pages, pg);
+ if (!base) {
+ fprintf(stderr, "No guest physical pages "
+ "available, paddr_min: 0x%lx "
+ "page_size: 0x%x memslot: %u "
+ "num_pages: %lu align: %lu\n",
+ paddr_min, vm->page_size, memslot,
+ num, align);
+ fputs("---- vm dump ----\n", stderr);
+ vm_dump(stderr, vm, 2);
+ TEST_ASSERT(false, "false");
+ abort();
}
+ goto again;
}
- } while (pg && pg != base + num);
-
- if (pg == 0) {
- fprintf(stderr, "No guest physical page available, "
- "paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
- paddr_min, vm->page_size, memslot);
- fputs("---- vm dump ----\n", stderr);
- vm_dump(stderr, vm, 2);
- abort();
}
for (pg = base; pg < base + num; ++pg)
@@ -1956,6 +1960,12 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
return base * vm->page_size;
}
+vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+ vm_paddr_t paddr_min, uint32_t memslot)
+{
+ return vm_phy_pages_alloc_align(vm, num, 1, paddr_min, memslot);
+}
+
vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
uint32_t memslot)
{
--
2.40.1
* [PATCH v3 3/6] KVM: PPC: selftests: add support for powerpc
2023-06-08 3:24 [PATCH v3 0/6] KVM: selftests: add powerpc support Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 1/6] KVM: selftests: Move pgd_created check into virt_pgd_alloc Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 2/6] KVM: selftests: Add aligned guest physical page allocator Nicholas Piggin
@ 2023-06-08 3:24 ` Nicholas Piggin
2023-06-14 0:20 ` Joel Stanley
2023-08-02 22:44 ` Sean Christopherson
2023-06-08 3:24 ` [PATCH v3 4/6] KVM: PPC: selftests: add selftests sanity tests Nicholas Piggin
` (2 subsequent siblings)
5 siblings, 2 replies; 9+ messages in thread
From: Nicholas Piggin @ 2023-06-08 3:24 UTC (permalink / raw)
To: kvm, Paolo Bonzini; +Cc: linuxppc-dev, Nicholas Piggin
Implement KVM selftests support for powerpc (Book3S-64).
ucalls are implemented with an unsupported PAPR hcall number, which always
causes KVM to exit to userspace.
Virtual memory is implemented for the radix MMU, and only the base page
size is supported (either 4K or 64K).
Guest interrupts are taken in real mode, so they require a page allocated
at gRA 0x0. Interrupt entry is complicated because gVA:gRA is not 1:1
mapped (as it is for the kernel), so the MMU cannot simply be switched on
and off.
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
MAINTAINERS | 2 +
tools/testing/selftests/kvm/Makefile | 19 +
.../selftests/kvm/include/kvm_util_base.h | 22 +
.../selftests/kvm/include/powerpc/hcall.h | 19 +
.../selftests/kvm/include/powerpc/ppc_asm.h | 32 ++
.../selftests/kvm/include/powerpc/processor.h | 39 ++
tools/testing/selftests/kvm/lib/guest_modes.c | 27 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 12 +
.../selftests/kvm/lib/powerpc/handlers.S | 93 ++++
.../testing/selftests/kvm/lib/powerpc/hcall.c | 45 ++
.../selftests/kvm/lib/powerpc/processor.c | 439 ++++++++++++++++++
.../testing/selftests/kvm/lib/powerpc/ucall.c | 30 ++
tools/testing/selftests/kvm/powerpc/helpers.h | 46 ++
13 files changed, 821 insertions(+), 4 deletions(-)
create mode 100644 tools/testing/selftests/kvm/include/powerpc/hcall.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/ppc_asm.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/processor.h
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/handlers.S
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/hcall.c
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/processor.c
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/ucall.c
create mode 100644 tools/testing/selftests/kvm/powerpc/helpers.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 44417acd2936..39afb356369e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11391,6 +11391,8 @@ F: arch/powerpc/include/asm/kvm*
F: arch/powerpc/include/uapi/asm/kvm*
F: arch/powerpc/kernel/kvm*
F: arch/powerpc/kvm/
+F: tools/testing/selftests/kvm/*/powerpc/
+F: tools/testing/selftests/kvm/powerpc/
KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)
M: Anup Patel <anup@brainfault.org>
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 4761b768b773..53cd3ce63dec 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -55,6 +55,11 @@ LIBKVM_s390x += lib/s390x/ucall.c
LIBKVM_riscv += lib/riscv/processor.c
LIBKVM_riscv += lib/riscv/ucall.c
+LIBKVM_powerpc += lib/powerpc/handlers.S
+LIBKVM_powerpc += lib/powerpc/processor.c
+LIBKVM_powerpc += lib/powerpc/ucall.c
+LIBKVM_powerpc += lib/powerpc/hcall.c
+
# Non-compiled test targets
TEST_PROGS_x86_64 += x86_64/nx_huge_pages_test.sh
@@ -179,6 +184,20 @@ TEST_GEN_PROGS_riscv += kvm_page_table_test
TEST_GEN_PROGS_riscv += set_memory_region_test
TEST_GEN_PROGS_riscv += kvm_binary_stats_test
+TEST_GEN_PROGS_powerpc += access_tracking_perf_test
+TEST_GEN_PROGS_powerpc += demand_paging_test
+TEST_GEN_PROGS_powerpc += dirty_log_test
+TEST_GEN_PROGS_powerpc += dirty_log_perf_test
+TEST_GEN_PROGS_powerpc += hardware_disable_test
+TEST_GEN_PROGS_powerpc += kvm_create_max_vcpus
+TEST_GEN_PROGS_powerpc += kvm_page_table_test
+TEST_GEN_PROGS_powerpc += max_guest_memory_test
+TEST_GEN_PROGS_powerpc += memslot_modification_stress_test
+TEST_GEN_PROGS_powerpc += memslot_perf_test
+TEST_GEN_PROGS_powerpc += rseq_test
+TEST_GEN_PROGS_powerpc += set_memory_region_test
+TEST_GEN_PROGS_powerpc += kvm_binary_stats_test
+
TEST_PROGS += $(TEST_PROGS_$(ARCH_DIR))
TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(ARCH_DIR))
TEST_GEN_PROGS_EXTENDED += $(TEST_GEN_PROGS_EXTENDED_$(ARCH_DIR))
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 42d03ae08ecb..17b80709b894 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -105,6 +105,7 @@ struct kvm_vm {
bool pgd_created;
vm_paddr_t ucall_mmio_addr;
vm_paddr_t pgd;
+ vm_paddr_t prtb; // powerpc process table
vm_vaddr_t gdt;
vm_vaddr_t tss;
vm_vaddr_t idt;
@@ -160,6 +161,8 @@ enum vm_guest_mode {
VM_MODE_PXXV48_4K, /* For 48bits VA but ANY bits PA */
VM_MODE_P47V64_4K,
VM_MODE_P44V64_4K,
+ VM_MODE_P52V52_4K,
+ VM_MODE_P52V52_64K,
VM_MODE_P36V48_4K,
VM_MODE_P36V48_16K,
VM_MODE_P36V48_64K,
@@ -197,6 +200,25 @@ extern enum vm_guest_mode vm_mode_default;
#define MIN_PAGE_SHIFT 12U
#define ptes_per_page(page_size) ((page_size) / 8)
+#elif defined(__powerpc64__)
+
+extern enum vm_guest_mode vm_mode_default;
+
+#define VM_MODE_DEFAULT vm_mode_default
+
+/*
+ * XXX: This is a hack to allocate more memory for page tables because we
+ * don't pack "fragments" well with 64K page sizes. Should rework generic
+ * code to allow more flexible page table memory estimation (and fix our
+ * page table allocation).
+ */
+#define MIN_PAGE_SHIFT 12U
+#define ptes_per_page(page_size) ((page_size) / 8)
+
+#else
+
+#error "KVM selftests not implemented for architecture"
+
#endif
#define MIN_PAGE_SIZE (1U << MIN_PAGE_SHIFT)
diff --git a/tools/testing/selftests/kvm/include/powerpc/hcall.h b/tools/testing/selftests/kvm/include/powerpc/hcall.h
new file mode 100644
index 000000000000..ba119f5a3fef
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/powerpc/hcall.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * powerpc hcall defines
+ */
+#ifndef SELFTEST_KVM_HCALL_H
+#define SELFTEST_KVM_HCALL_H
+
+#include <linux/compiler.h>
+
+/* Ucalls use unimplemented PAPR hcall 0 which exits KVM */
+#define H_UCALL 0
+#define UCALL_R4_UCALL 0x5715 // regular ucall, r5 contains ucall pointer
+#define UCALL_R4_SIMPLE 0x0000 // simple exit usable by asm with no ucall data
+
+int64_t hcall0(uint64_t token);
+int64_t hcall1(uint64_t token, uint64_t arg1);
+int64_t hcall2(uint64_t token, uint64_t arg1, uint64_t arg2);
+
+#endif
diff --git a/tools/testing/selftests/kvm/include/powerpc/ppc_asm.h b/tools/testing/selftests/kvm/include/powerpc/ppc_asm.h
new file mode 100644
index 000000000000..b9df64659792
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/powerpc/ppc_asm.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * powerpc asm specific defines
+ */
+#ifndef SELFTEST_KVM_PPC_ASM_H
+#define SELFTEST_KVM_PPC_ASM_H
+
+#define STACK_FRAME_MIN_SIZE 112 /* Could be 32 on ELFv2 */
+#define STACK_REDZONE_SIZE 512
+
+#define INT_FRAME_SIZE (STACK_FRAME_MIN_SIZE + STACK_REDZONE_SIZE)
+
+#define SPR_SRR0 0x01a
+#define SPR_SRR1 0x01b
+#define SPR_CFAR 0x01c
+
+#define MSR_SF 0x8000000000000000ULL
+#define MSR_HV 0x1000000000000000ULL
+#define MSR_VEC 0x0000000002000000ULL
+#define MSR_VSX 0x0000000000800000ULL
+#define MSR_EE 0x0000000000008000ULL
+#define MSR_PR 0x0000000000004000ULL
+#define MSR_FP 0x0000000000002000ULL
+#define MSR_ME 0x0000000000001000ULL
+#define MSR_IR 0x0000000000000020ULL
+#define MSR_DR 0x0000000000000010ULL
+#define MSR_RI 0x0000000000000002ULL
+#define MSR_LE 0x0000000000000001ULL
+
+#define LPCR_ILE 0x0000000002000000ULL
+
+#endif
diff --git a/tools/testing/selftests/kvm/include/powerpc/processor.h b/tools/testing/selftests/kvm/include/powerpc/processor.h
new file mode 100644
index 000000000000..ce5a23525dbd
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/powerpc/processor.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * powerpc processor specific defines
+ */
+#ifndef SELFTEST_KVM_PROCESSOR_H
+#define SELFTEST_KVM_PROCESSOR_H
+
+#include <linux/compiler.h>
+#include "ppc_asm.h"
+
+extern unsigned char __interrupts_start[];
+extern unsigned char __interrupts_end[];
+
+struct kvm_vm;
+struct kvm_vcpu;
+extern bool (*interrupt_handler)(struct kvm_vcpu *vcpu, unsigned trap);
+
+struct ex_regs {
+ uint64_t gprs[32];
+ uint64_t nia;
+ uint64_t msr;
+ uint64_t cfar;
+ uint64_t lr;
+ uint64_t ctr;
+ uint64_t xer;
+ uint32_t cr;
+ uint32_t trap;
+ uint64_t vaddr; /* vaddr of this struct */
+};
+
+void vm_install_exception_handler(struct kvm_vm *vm, int vector,
+ void (*handler)(struct ex_regs *));
+
+static inline void cpu_relax(void)
+{
+ asm volatile("" ::: "memory");
+}
+
+#endif
diff --git a/tools/testing/selftests/kvm/lib/guest_modes.c b/tools/testing/selftests/kvm/lib/guest_modes.c
index 1df3ce4b16fd..4dfaed1706d9 100644
--- a/tools/testing/selftests/kvm/lib/guest_modes.c
+++ b/tools/testing/selftests/kvm/lib/guest_modes.c
@@ -4,7 +4,11 @@
*/
#include "guest_modes.h"
-#ifdef __aarch64__
+#if defined(__powerpc__)
+#include <unistd.h>
+#endif
+
+#if defined(__aarch64__) || defined(__powerpc__)
#include "processor.h"
enum vm_guest_mode vm_mode_default;
#endif
@@ -13,9 +17,7 @@ struct guest_mode guest_modes[NUM_VM_MODES];
void guest_modes_append_default(void)
{
-#ifndef __aarch64__
- guest_mode_append(VM_MODE_DEFAULT, true, true);
-#else
+#ifdef __aarch64__
{
unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
bool ps4k, ps16k, ps64k;
@@ -70,6 +72,8 @@ void guest_modes_append_default(void)
KVM_S390_VM_CPU_PROCESSOR, &info);
close(vm_fd);
close(kvm_fd);
+
+ guest_mode_append(VM_MODE_DEFAULT, true, true);
/* Starting with z13 we have 47bits of physical address */
if (info.ibc >= 0x30)
guest_mode_append(VM_MODE_P47V64_4K, true, true);
@@ -79,12 +83,27 @@ void guest_modes_append_default(void)
{
unsigned int sz = kvm_check_cap(KVM_CAP_VM_GPA_BITS);
+ guest_mode_append(VM_MODE_DEFAULT, true, true);
if (sz >= 52)
guest_mode_append(VM_MODE_P52V48_4K, true, true);
if (sz >= 48)
guest_mode_append(VM_MODE_P48V48_4K, true, true);
}
#endif
+#ifdef __powerpc__
+ {
+ TEST_ASSERT(kvm_check_cap(KVM_CAP_PPC_MMU_RADIX),
+ "Radix MMU not available, KVM selftests "
+ "does not support Hash MMU!");
+ /* Radix guest EA and RA are 52-bit on POWER9 and POWER10 */
+ if (sysconf(_SC_PAGESIZE) == 4096)
+ vm_mode_default = VM_MODE_P52V52_4K;
+ else
+ vm_mode_default = VM_MODE_P52V52_64K;
+ guest_mode_append(VM_MODE_P52V52_4K, true, true);
+ guest_mode_append(VM_MODE_P52V52_64K, true, true);
+ }
+#endif
}
void for_each_guest_mode(void (*func)(enum vm_guest_mode, void *), void *arg)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 68558d60f949..696989a22c5a 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -158,6 +158,8 @@ const char *vm_guest_mode_string(uint32_t i)
[VM_MODE_PXXV48_4K] = "PA-bits:ANY, VA-bits:48, 4K pages",
[VM_MODE_P47V64_4K] = "PA-bits:47, VA-bits:64, 4K pages",
[VM_MODE_P44V64_4K] = "PA-bits:44, VA-bits:64, 4K pages",
+ [VM_MODE_P52V52_4K] = "PA-bits:52, VA-bits:52, 4K pages",
+ [VM_MODE_P52V52_64K] = "PA-bits:52, VA-bits:52, 64K pages",
[VM_MODE_P36V48_4K] = "PA-bits:36, VA-bits:48, 4K pages",
[VM_MODE_P36V48_16K] = "PA-bits:36, VA-bits:48, 16K pages",
[VM_MODE_P36V48_64K] = "PA-bits:36, VA-bits:48, 64K pages",
@@ -183,6 +185,8 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
[VM_MODE_PXXV48_4K] = { 0, 0, 0x1000, 12 },
[VM_MODE_P47V64_4K] = { 47, 64, 0x1000, 12 },
[VM_MODE_P44V64_4K] = { 44, 64, 0x1000, 12 },
+ [VM_MODE_P52V52_4K] = { 52, 52, 0x1000, 12 },
+ [VM_MODE_P52V52_64K] = { 52, 52, 0x10000, 16 },
[VM_MODE_P36V48_4K] = { 36, 48, 0x1000, 12 },
[VM_MODE_P36V48_16K] = { 36, 48, 0x4000, 14 },
[VM_MODE_P36V48_64K] = { 36, 48, 0x10000, 16 },
@@ -284,6 +288,14 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
case VM_MODE_P44V64_4K:
vm->pgtable_levels = 5;
break;
+#ifdef __powerpc__
+ case VM_MODE_P52V52_64K:
+ vm->pgtable_levels = 4;
+ break;
+ case VM_MODE_P52V52_4K:
+ vm->pgtable_levels = 4;
+ break;
+#endif
default:
TEST_FAIL("Unknown guest mode, mode: 0x%x", mode);
}
diff --git a/tools/testing/selftests/kvm/lib/powerpc/handlers.S b/tools/testing/selftests/kvm/lib/powerpc/handlers.S
new file mode 100644
index 000000000000..a68c187b835f
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/powerpc/handlers.S
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <ppc_asm.h>
+
+.macro INTERRUPT vec
+. = __interrupts_start + \vec
+ std %r0,(0*8)(%r13)
+ std %r3,(3*8)(%r13)
+ mfspr %r0,SPR_CFAR
+ li %r3,\vec
+ b handle_interrupt
+.endm
+
+.balign 0x1000
+.global __interrupts_start
+__interrupts_start:
+INTERRUPT 0x100
+INTERRUPT 0x200
+INTERRUPT 0x300
+INTERRUPT 0x380
+INTERRUPT 0x400
+INTERRUPT 0x480
+INTERRUPT 0x500
+INTERRUPT 0x600
+INTERRUPT 0x700
+INTERRUPT 0x800
+INTERRUPT 0x900
+INTERRUPT 0xa00
+INTERRUPT 0xc00
+INTERRUPT 0xd00
+INTERRUPT 0xf00
+INTERRUPT 0xf20
+INTERRUPT 0xf40
+INTERRUPT 0xf60
+
+virt_handle_interrupt:
+ stdu %r1,-INT_FRAME_SIZE(%r1)
+ mr %r3,%r31
+ bl route_interrupt
+ ld %r4,(32*8)(%r31) /* NIA */
+ ld %r5,(33*8)(%r31) /* MSR */
+ ld %r6,(35*8)(%r31) /* LR */
+ ld %r7,(36*8)(%r31) /* CTR */
+ ld %r8,(37*8)(%r31) /* XER */
+ lwz %r9,(38*8)(%r31) /* CR */
+ mtspr SPR_SRR0,%r4
+ mtspr SPR_SRR1,%r5
+ mtlr %r6
+ mtctr %r7
+ mtxer %r8
+ mtcr %r9
+reg=4
+ ld %r0,(0*8)(%r31)
+ ld %r3,(3*8)(%r31)
+.rept 28
+ ld reg,(reg*8)(%r31)
+ reg=reg+1
+.endr
+ addi %r1,%r1,INT_FRAME_SIZE
+ rfid
+
+virt_handle_interrupt_p:
+ .llong virt_handle_interrupt
+
+handle_interrupt:
+reg=4
+.rept 28
+ std reg,(reg*8)(%r13)
+ reg=reg+1
+.endr
+ mfspr %r4,SPR_SRR0
+ mfspr %r5,SPR_SRR1
+ mflr %r6
+ mfctr %r7
+ mfxer %r8
+ mfcr %r9
+ std %r4,(32*8)(%r13) /* NIA */
+ std %r5,(33*8)(%r13) /* MSR */
+ std %r0,(34*8)(%r13) /* CFAR */
+ std %r6,(35*8)(%r13) /* LR */
+ std %r7,(36*8)(%r13) /* CTR */
+ std %r8,(37*8)(%r13) /* XER */
+ stw %r9,(38*8)(%r13) /* CR */
+ stw %r3,(38*8 + 4)(%r13) /* TRAP */
+
+ ld %r31,(39*8)(%r13) /* vaddr */
+ ld %r4,virt_handle_interrupt_p - __interrupts_start(0)
+ mtspr SPR_SRR0,%r4
+ /* Reuse SRR1 */
+
+ rfid
+.global __interrupts_end
+__interrupts_end:
diff --git a/tools/testing/selftests/kvm/lib/powerpc/hcall.c b/tools/testing/selftests/kvm/lib/powerpc/hcall.c
new file mode 100644
index 000000000000..23a56aabad42
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/powerpc/hcall.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PAPR (pseries) hcall support.
+ */
+#include "kvm_util.h"
+#include "hcall.h"
+
+int64_t hcall0(uint64_t token)
+{
+ register uintptr_t r3 asm ("r3") = token;
+
+ asm volatile("sc 1" : "+r"(r3) :
+ : "r0", "r4", "r5", "r6", "r7", "r8", "r9",
+ "r10","r11", "r12", "ctr", "xer",
+ "memory");
+
+ return r3;
+}
+
+int64_t hcall1(uint64_t token, uint64_t arg1)
+{
+ register uintptr_t r3 asm ("r3") = token;
+ register uintptr_t r4 asm ("r4") = arg1;
+
+ asm volatile("sc 1" : "+r"(r3), "+r"(r4) :
+ : "r0", "r5", "r6", "r7", "r8", "r9",
+ "r10","r11", "r12", "ctr", "xer",
+ "memory");
+
+ return r3;
+}
+
+int64_t hcall2(uint64_t token, uint64_t arg1, uint64_t arg2)
+{
+ register uintptr_t r3 asm ("r3") = token;
+ register uintptr_t r4 asm ("r4") = arg1;
+ register uintptr_t r5 asm ("r5") = arg2;
+
+ asm volatile("sc 1" : "+r"(r3), "+r"(r4), "+r"(r5) :
+ : "r0", "r6", "r7", "r8", "r9",
+ "r10","r11", "r12", "ctr", "xer",
+ "memory");
+
+ return r3;
+}
diff --git a/tools/testing/selftests/kvm/lib/powerpc/processor.c b/tools/testing/selftests/kvm/lib/powerpc/processor.c
new file mode 100644
index 000000000000..02db2ff86da8
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/powerpc/processor.c
@@ -0,0 +1,439 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM selftest powerpc library code - CPU-related functions (page tables...)
+ */
+
+#include <linux/sizes.h>
+
+#include "processor.h"
+#include "kvm_util.h"
+#include "kvm_util_base.h"
+#include "guest_modes.h"
+#include "hcall.h"
+
+#define RADIX_TREE_SIZE ((0x2UL << 61) | (0x5UL << 5)) // 52-bits
+#define RADIX_PGD_INDEX_SIZE 13
+
+static void set_proc_table(struct kvm_vm *vm, int pid, uint64_t dw0, uint64_t dw1)
+{
+ uint64_t *proc_table;
+
+ proc_table = addr_gpa2hva(vm, vm->prtb);
+ proc_table[pid * 2 + 0] = cpu_to_be64(dw0);
+ proc_table[pid * 2 + 1] = cpu_to_be64(dw1);
+}
+
+static void set_radix_proc_table(struct kvm_vm *vm, int pid, vm_paddr_t pgd)
+{
+ set_proc_table(vm, pid, pgd | RADIX_TREE_SIZE | RADIX_PGD_INDEX_SIZE, 0);
+}
+
+void virt_arch_pgd_alloc(struct kvm_vm *vm)
+{
+ struct kvm_ppc_mmuv3_cfg mmu_cfg;
+ vm_paddr_t prtb, pgtb;
+ size_t pgd_pages;
+
+ TEST_ASSERT((vm->mode == VM_MODE_P52V52_4K) ||
+ (vm->mode == VM_MODE_P52V52_64K),
+ "Unsupported guest mode, mode: 0x%x", vm->mode);
+
+ prtb = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ vm->memslots[MEM_REGION_PT]);
+ vm->prtb = prtb;
+
+ pgd_pages = (1UL << (RADIX_PGD_INDEX_SIZE + 3)) >> vm->page_shift;
+ if (!pgd_pages)
+ pgd_pages = 1;
+ pgtb = vm_phy_pages_alloc_align(vm, pgd_pages, pgd_pages,
+ KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ vm->memslots[MEM_REGION_PT]);
+ vm->pgd = pgtb;
+
+ /* Set the base page directory in the proc table */
+ set_radix_proc_table(vm, 0, pgtb);
+
+ if (vm->mode == VM_MODE_P52V52_4K)
+ mmu_cfg.process_table = prtb | 0x8000000000000000UL | 0x0; // 4K size
+ else /* vm->mode == VM_MODE_P52V52_64K */
+ mmu_cfg.process_table = prtb | 0x8000000000000000UL | 0x4; // 64K size
+ mmu_cfg.flags = KVM_PPC_MMUV3_RADIX | KVM_PPC_MMUV3_GTSE;
+
+ vm_ioctl(vm, KVM_PPC_CONFIGURE_V3_MMU, &mmu_cfg);
+}
+
+static int pt_shift(struct kvm_vm *vm, int level)
+{
+ switch (level) {
+ case 1:
+ return 13;
+ case 2:
+ case 3:
+ return 9;
+ case 4:
+ if (vm->mode == VM_MODE_P52V52_4K)
+ return 9;
+ else /* vm->mode == VM_MODE_P52V52_64K */
+ return 5;
+ default:
+ TEST_ASSERT(false, "Invalid page table level %d\n", level);
+ return 0;
+ }
+}
+
+static uint64_t pt_entry_coverage(struct kvm_vm *vm, int level)
+{
+ uint64_t size = vm->page_size;
+
+ if (level == 4)
+ return size;
+ size <<= pt_shift(vm, 4);
+ if (level == 3)
+ return size;
+ size <<= pt_shift(vm, 3);
+ if (level == 2)
+ return size;
+ size <<= pt_shift(vm, 2);
+ return size;
+}
+
+static int pt_idx(struct kvm_vm *vm, uint64_t vaddr, int level, uint64_t *nls)
+{
+ switch (level) {
+ case 1:
+ *nls = 0x9;
+ return (vaddr >> 39) & 0x1fff;
+ case 2:
+ *nls = 0x9;
+ return (vaddr >> 30) & 0x1ff;
+ case 3:
+ if (vm->mode == VM_MODE_P52V52_4K)
+ *nls = 0x9;
+ else /* vm->mode == VM_MODE_P52V52_64K */
+ *nls = 0x5;
+ return (vaddr >> 21) & 0x1ff;
+ case 4:
+ if (vm->mode == VM_MODE_P52V52_4K)
+ return (vaddr >> 12) & 0x1ff;
+ else /* vm->mode == VM_MODE_P52V52_64K */
+ return (vaddr >> 16) & 0x1f;
+ default:
+ TEST_ASSERT(false, "Invalid page table level %d\n", level);
+ return 0;
+ }
+}
+
+static uint64_t *virt_get_pte(struct kvm_vm *vm, vm_paddr_t pt,
+ uint64_t vaddr, int level, uint64_t *nls)
+{
+ int idx = pt_idx(vm, vaddr, level, nls);
+ uint64_t *ptep = addr_gpa2hva(vm, pt + idx*8);
+
+ return ptep;
+}
+
+#define PTE_VALID 0x8000000000000000ull
+#define PTE_LEAF 0x4000000000000000ull
+#define PTE_REFERENCED 0x0000000000000100ull
+#define PTE_CHANGED 0x0000000000000080ull
+#define PTE_PRIV 0x0000000000000008ull
+#define PTE_READ 0x0000000000000004ull
+#define PTE_RW 0x0000000000000002ull
+#define PTE_EXEC 0x0000000000000001ull
+#define PTE_PAGE_MASK 0x01fffffffffff000ull
+
+#define PDE_VALID PTE_VALID
+#define PDE_NLS 0x0000000000000011ull
+#define PDE_PT_MASK 0x0fffffffffffff00ull
+
+void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
+{
+ vm_paddr_t pt = vm->pgd;
+ uint64_t *ptep, pte;
+ int level;
+
+ for (level = 1; level <= 3; level++) {
+ uint64_t nls;
+ uint64_t *pdep = virt_get_pte(vm, pt, gva, level, &nls);
+ uint64_t pde = be64_to_cpu(*pdep);
+ size_t pt_pages;
+
+ if (pde) {
+ TEST_ASSERT((pde & PDE_VALID) && !(pde & PTE_LEAF),
+ "Invalid PDE at level: %u gva: 0x%lx pde:0x%lx\n",
+ level, gva, pde);
+ pt = pde & PDE_PT_MASK;
+ continue;
+ }
+
+ pt_pages = (1ULL << (nls + 3)) >> vm->page_shift;
+ if (!pt_pages)
+ pt_pages = 1;
+ pt = vm_phy_pages_alloc_align(vm, pt_pages, pt_pages,
+ KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ vm->memslots[MEM_REGION_PT]);
+ pde = PDE_VALID | nls | pt;
+ *pdep = cpu_to_be64(pde);
+ }
+
+ ptep = virt_get_pte(vm, pt, gva, level, NULL);
+ pte = be64_to_cpu(*ptep);
+
+ TEST_ASSERT(!pte, "PTE already present at level: %u gva: 0x%lx pte:0x%lx\n",
+ level, gva, pte);
+
+ pte = PTE_VALID | PTE_LEAF | PTE_REFERENCED | PTE_CHANGED | PTE_PRIV |
+ PTE_READ | PTE_RW | PTE_EXEC | (gpa & PTE_PAGE_MASK);
+ *ptep = cpu_to_be64(pte);
+}
+
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+ vm_paddr_t pt = vm->pgd;
+ uint64_t *ptep, pte;
+ int level;
+
+ for (level = 1; level <= 3; level++) {
+ uint64_t nls;
+ uint64_t *pdep = virt_get_pte(vm, pt, gva, level, &nls);
+ uint64_t pde = be64_to_cpu(*pdep);
+
+ TEST_ASSERT((pde & PDE_VALID) && !(pde & PTE_LEAF),
+ "PDE not present at level: %u gva: 0x%lx pde:0x%lx\n",
+ level, gva, pde);
+ pt = pde & PDE_PT_MASK;
+ }
+
+ ptep = virt_get_pte(vm, pt, gva, level, NULL);
+ pte = be64_to_cpu(*ptep);
+
+ TEST_ASSERT(pte,
+ "PTE not present at level: %u gva: 0x%lx pte:0x%lx\n",
+ level, gva, pte);
+
+ TEST_ASSERT((pte & PTE_VALID) && (pte & PTE_LEAF) &&
+ (pte & PTE_READ) && (pte & PTE_RW) && (pte & PTE_EXEC),
+ "PTE not valid at level: %u gva: 0x%lx pte:0x%lx\n",
+ level, gva, pte);
+
+ return (pte & PTE_PAGE_MASK) + (gva & (vm->page_size - 1));
+}
+
+static void virt_dump_pt(FILE *stream, struct kvm_vm *vm, vm_paddr_t pt,
+ vm_vaddr_t va, int level, uint8_t indent)
+{
+ int size, idx;
+
+ size = 1U << (pt_shift(vm, level) + 3);
+
+ for (idx = 0; idx < size; idx += 8, va += pt_entry_coverage(vm, level)) {
+ uint64_t *page_table = addr_gpa2hva(vm, pt + idx);
+ uint64_t pte = be64_to_cpu(*page_table);
+
+ if (!(pte & PTE_VALID))
+ continue;
+
+ if (pte & PTE_LEAF) {
+ fprintf(stream,
+ "%*s PTE[%d] gVA:0x%016lx -> gRA:0x%016llx\n",
+ indent, "", idx/8, va, pte & PTE_PAGE_MASK);
+ } else {
+ fprintf(stream, "%*sPDE%d[%d] gVA:0x%016lx\n",
+ indent, "", level, idx/8, va);
+ virt_dump_pt(stream, vm, pte & PDE_PT_MASK, va,
+ level + 1, indent + 2);
+ }
+ }
+
+}
+
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+{
+ vm_paddr_t pt = vm->pgd;
+
+ if (!vm->pgd_created)
+ return;
+
+ virt_dump_pt(stream, vm, pt, 0, 1, indent);
+}
+
+static unsigned long get_r2(void)
+{
+ unsigned long r2;
+
+ asm("mr %0,%%r2" : "=r"(r2));
+
+ return r2;
+}
+
+struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
+ void *guest_code)
+{
+ const size_t stack_size = SZ_64K;
+ vm_vaddr_t stack_vaddr, ex_regs_vaddr;
+ vm_paddr_t ex_regs_paddr;
+ struct ex_regs *ex_regs;
+ struct kvm_regs regs;
+ struct kvm_vcpu *vcpu;
+ uint64_t lpcr;
+
+ stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
+ DEFAULT_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
+
+ ex_regs_vaddr = __vm_vaddr_alloc(vm, stack_size,
+ DEFAULT_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
+ ex_regs_paddr = addr_gva2gpa(vm, ex_regs_vaddr);
+ ex_regs = addr_gpa2hva(vm, ex_regs_paddr);
+ ex_regs->vaddr = ex_regs_vaddr;
+
+ vcpu = __vm_vcpu_add(vm, vcpu_id);
+
+ vcpu_enable_cap(vcpu, KVM_CAP_PPC_PAPR, 1);
+
+ /* Setup guest registers */
+ vcpu_regs_get(vcpu, ®s);
+ vcpu_get_reg(vcpu, KVM_REG_PPC_LPCR_64, &lpcr);
+
+ regs.pc = (uintptr_t)guest_code;
+ regs.gpr[1] = stack_vaddr + stack_size - 256;
+ regs.gpr[2] = (uintptr_t)get_r2();
+ regs.gpr[12] = (uintptr_t)guest_code;
+ regs.gpr[13] = (uintptr_t)ex_regs_paddr;
+
+ regs.msr = MSR_SF | MSR_VEC | MSR_VSX | MSR_FP |
+ MSR_ME | MSR_IR | MSR_DR | MSR_RI;
+
+ if (BYTE_ORDER == LITTLE_ENDIAN) {
+ regs.msr |= MSR_LE;
+ lpcr |= LPCR_ILE;
+ } else {
+ lpcr &= ~LPCR_ILE;
+ }
+
+ vcpu_regs_set(vcpu, ®s);
+ vcpu_set_reg(vcpu, KVM_REG_PPC_LPCR_64, lpcr);
+
+ return vcpu;
+}
+
+void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
+{
+ va_list ap;
+ struct kvm_regs regs;
+ int i;
+
+ TEST_ASSERT(num >= 1 && num <= 5, "Unsupported number of args: %u\n",
+ num);
+
+ va_start(ap, num);
+ vcpu_regs_get(vcpu, ®s);
+
+ for (i = 0; i < num; i++)
+ regs.gpr[i + 3] = va_arg(ap, uint64_t);
+
+ vcpu_regs_set(vcpu, ®s);
+ va_end(ap);
+}
+
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+{
+ struct kvm_regs regs;
+
+ vcpu_regs_get(vcpu, ®s);
+
+ fprintf(stream, "%*sNIA: 0x%016llx MSR: 0x%016llx\n",
+ indent, "", regs.pc, regs.msr);
+ fprintf(stream, "%*sLR:  0x%016llx CTR: 0x%016llx\n",
+ indent, "", regs.lr, regs.ctr);
+ fprintf(stream, "%*sCR:  0x%08llx XER: 0x%016llx\n",
+ indent, "", regs.cr, regs.xer);
+}
+
+void vm_init_descriptor_tables(struct kvm_vm *vm)
+{
+}
+
+void kvm_arch_vm_post_create(struct kvm_vm *vm)
+{
+ vm_paddr_t excp_paddr;
+ void *mem;
+
+ excp_paddr = vm_phy_page_alloc(vm, 0, vm->memslots[MEM_REGION_DATA]);
+
+ TEST_ASSERT(excp_paddr == 0,
+ "Interrupt vectors not allocated at gPA address 0: (0x%lx)",
+ excp_paddr);
+
+ mem = addr_gpa2hva(vm, excp_paddr);
+ memcpy(mem, __interrupts_start, __interrupts_end - __interrupts_start);
+}
+
+void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
+{
+ struct ucall uc;
+
+ if (get_ucall(vcpu, &uc) == UCALL_UNHANDLED) {
+ vm_paddr_t ex_regs_paddr;
+ struct ex_regs *ex_regs;
+ struct kvm_regs regs;
+
+ vcpu_regs_get(vcpu, ®s);
+ ex_regs_paddr = (vm_paddr_t)regs.gpr[13];
+ ex_regs = addr_gpa2hva(vcpu->vm, ex_regs_paddr);
+
+ TEST_FAIL("Unexpected interrupt in guest NIA:0x%016lx MSR:0x%016lx TRAP:0x%04x",
+ ex_regs->nia, ex_regs->msr, ex_regs->trap);
+ }
+}
+
+struct handler {
+ void (*fn)(struct ex_regs *regs);
+ int trap;
+};
+
+#define NR_HANDLERS 10
+static struct handler handlers[NR_HANDLERS];
+
+void route_interrupt(struct ex_regs *regs)
+{
+ int i;
+
+ for (i = 0; i < NR_HANDLERS; i++) {
+ if (handlers[i].trap == regs->trap) {
+ handlers[i].fn(regs);
+ return;
+ }
+ }
+
+ ucall(UCALL_UNHANDLED, 0);
+}
+
+void vm_install_exception_handler(struct kvm_vm *vm, int trap,
+ void (*fn)(struct ex_regs *))
+{
+ int i;
+
+ for (i = 0; i < NR_HANDLERS; i++) {
+ if (!handlers[i].trap || handlers[i].trap == trap) {
+ if (fn == NULL)
+ trap = 0; /* Clear handler */
+ handlers[i].trap = trap;
+ handlers[i].fn = fn;
+ sync_global_to_guest(vm, handlers[i]);
+ return;
+ }
+ }
+
+ TEST_FAIL("Out of exception handlers");
+}
+
+void kvm_selftest_arch_init(void)
+{
+ /*
+ * The powerpc default guest mode depends on the host page size
+ * rather than being static, so compute it early.
+ */
+ guest_modes_append_default();
+}
diff --git a/tools/testing/selftests/kvm/lib/powerpc/ucall.c b/tools/testing/selftests/kvm/lib/powerpc/ucall.c
new file mode 100644
index 000000000000..ce0ddde45fef
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/powerpc/ucall.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ucall support. A ucall is a "hypercall to host userspace".
+ */
+#include "kvm_util.h"
+#include "hcall.h"
+
+void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+{
+}
+
+void ucall_arch_do_ucall(vm_vaddr_t uc)
+{
+ hcall2(H_UCALL, UCALL_R4_UCALL, (uintptr_t)(uc));
+}
+
+void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
+{
+ struct kvm_run *run = vcpu->run;
+
+ if (run->exit_reason == KVM_EXIT_PAPR_HCALL &&
+ run->papr_hcall.nr == H_UCALL) {
+ struct kvm_regs regs;
+
+ vcpu_regs_get(vcpu, ®s);
+ if (regs.gpr[4] == UCALL_R4_UCALL)
+ return (void *)regs.gpr[5];
+ }
+ return NULL;
+}
diff --git a/tools/testing/selftests/kvm/powerpc/helpers.h b/tools/testing/selftests/kvm/powerpc/helpers.h
new file mode 100644
index 000000000000..8f60bb826830
--- /dev/null
+++ b/tools/testing/selftests/kvm/powerpc/helpers.h
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#ifndef SELFTEST_KVM_HELPERS_H
+#define SELFTEST_KVM_HELPERS_H
+
+#include "kvm_util.h"
+#include "processor.h"
+
+static inline void __handle_ucall(struct kvm_vcpu *vcpu, uint64_t expect, struct ucall *uc)
+{
+ uint64_t ret;
+ struct kvm_regs regs;
+
+ ret = get_ucall(vcpu, uc);
+ if (ret == expect)
+ return;
+
+ vcpu_regs_get(vcpu, ®s);
+ fprintf(stderr, "Guest failure at NIA:0x%016llx MSR:0x%016llx\n", regs.pc, regs.msr);
+ fprintf(stderr, "Expected ucall: %lu\n", expect);
+
+ if (ret == UCALL_ABORT)
+ REPORT_GUEST_ASSERT(*uc);
+ else
+ TEST_FAIL("Unexpected ucall: %lu exit_reason=%s",
+ ret, exit_reason_str(vcpu->run->exit_reason));
+}
+
+static inline void handle_ucall(struct kvm_vcpu *vcpu, uint64_t expect)
+{
+ struct ucall uc;
+
+ __handle_ucall(vcpu, expect, &uc);
+}
+
+static inline void host_sync(struct kvm_vcpu *vcpu, uint64_t sync)
+{
+ struct ucall uc;
+
+ __handle_ucall(vcpu, UCALL_SYNC, &uc);
+
+ TEST_ASSERT(uc.args[1] == (sync), "Sync failed host:%ld guest:%ld",
+ (long)sync, (long)uc.args[1]);
+}
+
+#endif
--
2.40.1
* [PATCH v3 4/6] KVM: PPC: selftests: add selftests sanity tests
2023-06-08 3:24 [PATCH v3 0/6] KVM: selftests: add powerpc support Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 3/6] KVM: PPC: selftests: add support for powerpc Nicholas Piggin
@ 2023-06-08 3:24 ` Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 5/6] KVM: PPC: selftests: Add a TLBIEL virtualisation tester Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 6/6] KVM: PPC: selftests: Add interrupt performance tester Nicholas Piggin
5 siblings, 0 replies; 9+ messages in thread
From: Nicholas Piggin @ 2023-06-08 3:24 UTC (permalink / raw)
To: kvm, Paolo Bonzini; +Cc: linuxppc-dev, Nicholas Piggin
Add tests that exercise very basic functions of the kvm selftests
framework: guest creation, ucalls, hcalls, copying data between guest
and host, interrupts, and page faults.
These don't stress KVM so much as provide a sanity check when
developing powerpc support.
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
tools/testing/selftests/kvm/Makefile | 2 +
.../selftests/kvm/include/powerpc/hcall.h | 2 +
.../testing/selftests/kvm/powerpc/null_test.c | 166 ++++++++++++++++++
.../selftests/kvm/powerpc/rtas_hcall.c | 136 ++++++++++++++
4 files changed, 306 insertions(+)
create mode 100644 tools/testing/selftests/kvm/powerpc/null_test.c
create mode 100644 tools/testing/selftests/kvm/powerpc/rtas_hcall.c
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 53cd3ce63dec..efb8700b9752 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -184,6 +184,8 @@ TEST_GEN_PROGS_riscv += kvm_page_table_test
TEST_GEN_PROGS_riscv += set_memory_region_test
TEST_GEN_PROGS_riscv += kvm_binary_stats_test
+TEST_GEN_PROGS_powerpc += powerpc/null_test
+TEST_GEN_PROGS_powerpc += powerpc/rtas_hcall
TEST_GEN_PROGS_powerpc += access_tracking_perf_test
TEST_GEN_PROGS_powerpc += demand_paging_test
TEST_GEN_PROGS_powerpc += dirty_log_test
diff --git a/tools/testing/selftests/kvm/include/powerpc/hcall.h b/tools/testing/selftests/kvm/include/powerpc/hcall.h
index ba119f5a3fef..04c7d2d13020 100644
--- a/tools/testing/selftests/kvm/include/powerpc/hcall.h
+++ b/tools/testing/selftests/kvm/include/powerpc/hcall.h
@@ -12,6 +12,8 @@
#define UCALL_R4_UCALL 0x5715 // regular ucall, r5 contains ucall pointer
#define UCALL_R4_SIMPLE 0x0000 // simple exit usable by asm with no ucall data
+#define H_RTAS 0xf000
+
int64_t hcall0(uint64_t token);
int64_t hcall1(uint64_t token, uint64_t arg1);
int64_t hcall2(uint64_t token, uint64_t arg1, uint64_t arg2);
diff --git a/tools/testing/selftests/kvm/powerpc/null_test.c b/tools/testing/selftests/kvm/powerpc/null_test.c
new file mode 100644
index 000000000000..31db0b6becd6
--- /dev/null
+++ b/tools/testing/selftests/kvm/powerpc/null_test.c
@@ -0,0 +1,166 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Tests for guest creation, run, ucall, interrupt, and vm dumping.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "kselftest.h"
+#include "processor.h"
+#include "helpers.h"
+
+extern void guest_code_asm(void);
+asm(".global guest_code_asm");
+asm(".balign 4");
+asm("guest_code_asm:");
+asm("li 3,0"); // H_UCALL
+asm("li 4,0"); // UCALL_R4_SIMPLE
+asm("sc 1");
+
+static void test_asm(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ vm = vm_create_with_one_vcpu(&vcpu, guest_code_asm);
+
+ vcpu_run(vcpu);
+ handle_ucall(vcpu, UCALL_NONE);
+
+ kvm_vm_free(vm);
+}
+
+static void guest_code_ucall(void)
+{
+ GUEST_DONE();
+}
+
+static void test_ucall(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ vm = vm_create_with_one_vcpu(&vcpu, guest_code_ucall);
+
+ vcpu_run(vcpu);
+ handle_ucall(vcpu, UCALL_DONE);
+
+ kvm_vm_free(vm);
+}
+
+static void trap_handler(struct ex_regs *regs)
+{
+ GUEST_SYNC(1);
+ regs->nia += 4;
+}
+
+static void guest_code_trap(void)
+{
+ GUEST_SYNC(0);
+ asm volatile("trap");
+ GUEST_DONE();
+}
+
+static void test_trap(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ vm = vm_create_with_one_vcpu(&vcpu, guest_code_trap);
+ vm_install_exception_handler(vm, 0x700, trap_handler);
+
+ vcpu_run(vcpu);
+ host_sync(vcpu, 0);
+ vcpu_run(vcpu);
+ host_sync(vcpu, 1);
+ vcpu_run(vcpu);
+ handle_ucall(vcpu, UCALL_DONE);
+
+ vm_install_exception_handler(vm, 0x700, NULL);
+
+ kvm_vm_free(vm);
+}
+
+static void dsi_handler(struct ex_regs *regs)
+{
+ GUEST_SYNC(1);
+ regs->nia += 4;
+}
+
+static void guest_code_dsi(void)
+{
+ GUEST_SYNC(0);
+ asm volatile("stb %r0,0(0)");
+ GUEST_DONE();
+}
+
+static void test_dsi(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ vm = vm_create_with_one_vcpu(&vcpu, guest_code_dsi);
+ vm_install_exception_handler(vm, 0x300, dsi_handler);
+
+ vcpu_run(vcpu);
+ host_sync(vcpu, 0);
+ vcpu_run(vcpu);
+ host_sync(vcpu, 1);
+ vcpu_run(vcpu);
+ handle_ucall(vcpu, UCALL_DONE);
+
+ vm_install_exception_handler(vm, 0x300, NULL);
+
+ kvm_vm_free(vm);
+}
+
+static void test_dump(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ vm = vm_create_with_one_vcpu(&vcpu, guest_code_ucall);
+
+ vcpu_run(vcpu);
+ handle_ucall(vcpu, UCALL_DONE);
+
+ printf("Testing vm_dump...\n");
+ vm_dump(stderr, vm, 2);
+
+ kvm_vm_free(vm);
+}
+
+
+struct testdef {
+ const char *name;
+ void (*test)(void);
+} testlist[] = {
+ { "null asm test", test_asm},
+ { "null ucall test", test_ucall},
+ { "trap test", test_trap},
+ { "page fault test", test_dsi},
+ { "vm dump test", test_dump},
+};
+
+int main(int argc, char *argv[])
+{
+ int idx;
+
+ ksft_print_header();
+
+ ksft_set_plan(ARRAY_SIZE(testlist));
+
+ for (idx = 0; idx < ARRAY_SIZE(testlist); idx++) {
+ testlist[idx].test();
+ ksft_test_result_pass("%s\n", testlist[idx].name);
+ }
+
+ ksft_finished(); /* Print results and exit() accordingly */
+}
diff --git a/tools/testing/selftests/kvm/powerpc/rtas_hcall.c b/tools/testing/selftests/kvm/powerpc/rtas_hcall.c
new file mode 100644
index 000000000000..05af22c711cb
--- /dev/null
+++ b/tools/testing/selftests/kvm/powerpc/rtas_hcall.c
@@ -0,0 +1,136 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Test the KVM H_RTAS hcall and copying buffers between guest and host.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "kselftest.h"
+#include "hcall.h"
+
+struct rtas_args {
+ __be32 token;
+ __be32 nargs;
+ __be32 nret;
+ __be32 args[16];
+ __be32 *rets; /* Pointer to return values in args[]. */
+};
+
+static void guest_code(void)
+{
+ struct rtas_args r;
+ int64_t rc;
+
+ r.token = cpu_to_be32(0xdeadbeef);
+ r.nargs = cpu_to_be32(3);
+ r.nret = cpu_to_be32(2);
+ r.rets = &r.args[3];
+ r.args[0] = cpu_to_be32(0x1000);
+ r.args[1] = cpu_to_be32(0x1001);
+ r.args[2] = cpu_to_be32(0x1002);
+ rc = hcall1(H_RTAS, (uint64_t)&r);
+ GUEST_ASSERT(rc == 0);
+ GUEST_ASSERT_1(be32_to_cpu(r.rets[0]) == 0xabc, be32_to_cpu(r.rets[0]));
+ GUEST_ASSERT_1(be32_to_cpu(r.rets[1]) == 0x123, be32_to_cpu(r.rets[1]));
+
+ GUEST_DONE();
+}
+
+int main(int argc, char *argv[])
+{
+ struct kvm_regs regs;
+ struct rtas_args *r;
+ vm_vaddr_t rtas_vaddr;
+ struct ucall uc;
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ uint64_t tmp;
+ int ret;
+
+ ksft_print_header();
+
+ ksft_set_plan(1);
+
+ /* Create VM */
+ vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+
+ ret = _vcpu_run(vcpu);
+ TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
+ switch ((tmp = get_ucall(vcpu, &uc))) {
+ case UCALL_NONE:
+ break; // good
+ case UCALL_DONE:
+ TEST_FAIL("Unexpected final guest exit %lu\n", tmp);
+ break;
+ case UCALL_ABORT:
+ REPORT_GUEST_ASSERT_N(uc, "values: %lu (0x%lx)\n",
+ GUEST_ASSERT_ARG(uc, 0),
+ GUEST_ASSERT_ARG(uc, 0));
+ break;
+ default:
+ TEST_FAIL("Unexpected guest exit %lu\n", tmp);
+ }
+
+ TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_PAPR_HCALL,
+ "Expected PAPR_HCALL exit, got %s\n",
+ exit_reason_str(vcpu->run->exit_reason));
+ TEST_ASSERT(vcpu->run->papr_hcall.nr == H_RTAS,
+ "Expected H_RTAS exit, got %lld\n",
+ vcpu->run->papr_hcall.nr);
+
+ vcpu_regs_get(vcpu, ®s);
+ rtas_vaddr = regs.gpr[4];
+
+ r = addr_gva2hva(vm, rtas_vaddr);
+
+ TEST_ASSERT(r->token == cpu_to_be32(0xdeadbeef),
+ "Expected RTAS token 0xdeadbeef, got 0x%x\n",
+ be32_to_cpu(r->token));
+ TEST_ASSERT(r->nargs == cpu_to_be32(3),
+ "Expected RTAS nargs 3, got %u\n",
+ be32_to_cpu(r->nargs));
+ TEST_ASSERT(r->nret == cpu_to_be32(2),
+ "Expected RTAS nret 2, got %u\n",
+ be32_to_cpu(r->nret));
+ TEST_ASSERT(r->args[0] == cpu_to_be32(0x1000),
+ "Expected args[0] to be 0x1000, got 0x%x\n",
+ be32_to_cpu(r->args[0]));
+ TEST_ASSERT(r->args[1] == cpu_to_be32(0x1001),
+ "Expected args[1] to be 0x1001, got 0x%x\n",
+ be32_to_cpu(r->args[1]));
+ TEST_ASSERT(r->args[2] == cpu_to_be32(0x1002),
+ "Expected args[2] to be 0x1002, got 0x%x\n",
+ be32_to_cpu(r->args[2]));
+
+ r->args[3] = cpu_to_be32(0xabc);
+ r->args[4] = cpu_to_be32(0x123);
+
+ regs.gpr[3] = 0;
+ vcpu_regs_set(vcpu, ®s);
+
+ ret = _vcpu_run(vcpu);
+ TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
+ switch ((tmp = get_ucall(vcpu, &uc))) {
+ case UCALL_DONE:
+ break;
+ case UCALL_ABORT:
+ REPORT_GUEST_ASSERT_N(uc, "values: %lu (0x%lx)\n",
+ GUEST_ASSERT_ARG(uc, 0),
+ GUEST_ASSERT_ARG(uc, 0));
+ break;
+ default:
+ TEST_FAIL("Unexpected guest exit %lu\n", tmp);
+ }
+
+ kvm_vm_free(vm);
+
+ ksft_test_result_pass("%s\n", "rtas buffer copy test");
+ ksft_finished(); /* Print results and exit() accordingly */
+}
--
2.40.1
* [PATCH v3 5/6] KVM: PPC: selftests: Add a TLBIEL virtualisation tester
2023-06-08 3:24 [PATCH v3 0/6] KVM: selftests: add powerpc support Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 4/6] KVM: PPC: selftests: add selftests sanity tests Nicholas Piggin
@ 2023-06-08 3:24 ` Nicholas Piggin
2023-06-08 3:24 ` [PATCH v3 6/6] KVM: PPC: selftests: Add interrupt performance tester Nicholas Piggin
5 siblings, 0 replies; 9+ messages in thread
From: Nicholas Piggin @ 2023-06-08 3:24 UTC (permalink / raw)
To: kvm, Paolo Bonzini; +Cc: linuxppc-dev, Nicholas Piggin
TLBIEL virtualisation has been a source of difficulty. The TLBIEL
instruction operates on the TLB of the hardware thread which executes
it, but the behaviour expected by the guest environment is that it
invalidates all translations cached for the vCPU that executed it,
wherever that vCPU runs. Add a test that stresses this by moving vCPUs
between host CPUs while creating and invalidating translations.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/include/powerpc/processor.h | 7 +
.../selftests/kvm/lib/powerpc/processor.c | 108 +++-
.../selftests/kvm/powerpc/tlbiel_test.c | 508 ++++++++++++++++++
4 files changed, 621 insertions(+), 3 deletions(-)
create mode 100644 tools/testing/selftests/kvm/powerpc/tlbiel_test.c
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index efb8700b9752..aa3a8ca676c2 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -186,6 +186,7 @@ TEST_GEN_PROGS_riscv += kvm_binary_stats_test
TEST_GEN_PROGS_powerpc += powerpc/null_test
TEST_GEN_PROGS_powerpc += powerpc/rtas_hcall
+TEST_GEN_PROGS_powerpc += powerpc/tlbiel_test
TEST_GEN_PROGS_powerpc += access_tracking_perf_test
TEST_GEN_PROGS_powerpc += demand_paging_test
TEST_GEN_PROGS_powerpc += dirty_log_test
diff --git a/tools/testing/selftests/kvm/include/powerpc/processor.h b/tools/testing/selftests/kvm/include/powerpc/processor.h
index ce5a23525dbd..92ef6476a9ef 100644
--- a/tools/testing/selftests/kvm/include/powerpc/processor.h
+++ b/tools/testing/selftests/kvm/include/powerpc/processor.h
@@ -7,6 +7,7 @@
#include <linux/compiler.h>
#include "ppc_asm.h"
+#include "kvm_util_base.h"
extern unsigned char __interrupts_start[];
extern unsigned char __interrupts_end[];
@@ -31,6 +32,12 @@ struct ex_regs {
void vm_install_exception_handler(struct kvm_vm *vm, int vector,
void (*handler)(struct ex_regs *));
+vm_paddr_t virt_pt_duplicate(struct kvm_vm *vm);
+void set_radix_proc_table(struct kvm_vm *vm, int pid, vm_paddr_t pgd);
+bool virt_wrprotect_pte(struct kvm_vm *vm, uint64_t gva);
+bool virt_wrenable_pte(struct kvm_vm *vm, uint64_t gva);
+bool virt_remap_pte(struct kvm_vm *vm, uint64_t gva, vm_paddr_t gpa);
+
static inline void cpu_relax(void)
{
asm volatile("" ::: "memory");
diff --git a/tools/testing/selftests/kvm/lib/powerpc/processor.c b/tools/testing/selftests/kvm/lib/powerpc/processor.c
index 02db2ff86da8..17ea440f9026 100644
--- a/tools/testing/selftests/kvm/lib/powerpc/processor.c
+++ b/tools/testing/selftests/kvm/lib/powerpc/processor.c
@@ -23,7 +23,7 @@ static void set_proc_table(struct kvm_vm *vm, int pid, uint64_t dw0, uint64_t dw
proc_table[pid * 2 + 1] = cpu_to_be64(dw1);
}
-static void set_radix_proc_table(struct kvm_vm *vm, int pid, vm_paddr_t pgd)
+void set_radix_proc_table(struct kvm_vm *vm, int pid, vm_paddr_t pgd)
{
set_proc_table(vm, pid, pgd | RADIX_TREE_SIZE | RADIX_PGD_INDEX_SIZE, 0);
}
@@ -146,9 +146,69 @@ static uint64_t *virt_get_pte(struct kvm_vm *vm, vm_paddr_t pt,
#define PDE_NLS 0x0000000000000011ull
#define PDE_PT_MASK 0x0fffffffffffff00ull
-void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
+static uint64_t *virt_lookup_pte(struct kvm_vm *vm, uint64_t gva)
{
vm_paddr_t pt = vm->pgd;
+ uint64_t *ptep;
+ int level;
+
+ for (level = 1; level <= 3; level++) {
+ uint64_t nls;
+ uint64_t *pdep = virt_get_pte(vm, pt, gva, level, &nls);
+ uint64_t pde = be64_to_cpu(*pdep);
+
+ if (pde) {
+ TEST_ASSERT((pde & PDE_VALID) && !(pde & PTE_LEAF),
+ "Invalid PDE at level: %u gva: 0x%lx pde:0x%lx\n",
+ level, gva, pde);
+ pt = pde & PDE_PT_MASK;
+ continue;
+ }
+
+ return NULL;
+ }
+
+ ptep = virt_get_pte(vm, pt, gva, level, NULL);
+
+ return ptep;
+}
+
+static bool virt_modify_pte(struct kvm_vm *vm, uint64_t gva, uint64_t clr, uint64_t set)
+{
+ uint64_t *ptep, pte;
+
+ ptep = virt_lookup_pte(vm, gva);
+ if (!ptep)
+ return false;
+
+ pte = be64_to_cpu(*ptep);
+ if (!(pte & PTE_VALID))
+ return false;
+
+ pte = (pte & ~clr) | set;
+ *ptep = cpu_to_be64(pte);
+
+ return true;
+}
+
+bool virt_remap_pte(struct kvm_vm *vm, uint64_t gva, vm_paddr_t gpa)
+{
+ return virt_modify_pte(vm, gva, PTE_PAGE_MASK, (gpa & PTE_PAGE_MASK));
+}
+
+bool virt_wrprotect_pte(struct kvm_vm *vm, uint64_t gva)
+{
+ return virt_modify_pte(vm, gva, PTE_RW, 0);
+}
+
+bool virt_wrenable_pte(struct kvm_vm *vm, uint64_t gva)
+{
+ return virt_modify_pte(vm, gva, 0, PTE_RW);
+}
+
+static void __virt_arch_pg_map(struct kvm_vm *vm, vm_paddr_t pgd, uint64_t gva, uint64_t gpa)
+{
+ vm_paddr_t pt = pgd;
uint64_t *ptep, pte;
int level;
@@ -187,6 +247,49 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
*ptep = cpu_to_be64(pte);
}
+void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
+{
+ __virt_arch_pg_map(vm, vm->pgd, gva, gpa);
+}
+
+static void __virt_pt_duplicate(struct kvm_vm *vm, vm_paddr_t pgd, vm_paddr_t pt, vm_vaddr_t va, int level)
+{
+ uint64_t *page_table;
+ int size, idx;
+
+ page_table = addr_gpa2hva(vm, pt);
+ size = 1U << pt_shift(vm, level);
+ for (idx = 0; idx < size; idx++) {
+ uint64_t pte = be64_to_cpu(page_table[idx]);
+ if (pte & PTE_VALID) {
+ if (pte & PTE_LEAF) {
+ __virt_arch_pg_map(vm, pgd, va, pte & PTE_PAGE_MASK);
+ } else {
+ __virt_pt_duplicate(vm, pgd, pte & PDE_PT_MASK, va, level + 1);
+ }
+ }
+ va += pt_entry_coverage(vm, level);
+ }
+}
+
+vm_paddr_t virt_pt_duplicate(struct kvm_vm *vm)
+{
+ vm_paddr_t pgtb;
+ uint64_t *page_table;
+ size_t pgd_pages;
+
+ pgd_pages = (1UL << (RADIX_PGD_INDEX_SIZE + 3)) >> vm->page_shift;
+ TEST_ASSERT(pgd_pages == 1, "PGD allocation must be single page");
+ pgtb = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ vm->memslots[MEM_REGION_PT]);
+ page_table = addr_gpa2hva(vm, pgtb);
+ memset(page_table, 0, vm->page_size * pgd_pages);
+
+ __virt_pt_duplicate(vm, pgtb, vm->pgd, 0, 1);
+
+ return pgtb;
+}
+
vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
{
vm_paddr_t pt = vm->pgd;
@@ -244,7 +347,6 @@ static void virt_dump_pt(FILE *stream, struct kvm_vm *vm, vm_paddr_t pt,
level + 1, indent + 2);
}
}
-
}
void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
diff --git a/tools/testing/selftests/kvm/powerpc/tlbiel_test.c b/tools/testing/selftests/kvm/powerpc/tlbiel_test.c
new file mode 100644
index 000000000000..63ffcff15617
--- /dev/null
+++ b/tools/testing/selftests/kvm/powerpc/tlbiel_test.c
@@ -0,0 +1,508 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Test TLBIEL virtualisation. The TLBIEL instruction operates on cached
+ * translations of the hardware thread and/or core which executes it, but the
+ * behaviour required of the guest is that it should invalidate cached
+ * translations visible to the vCPU that executed it. The instruction can
+ * not be trapped by the hypervisor.
+ *
+ * This requires that when a vCPU is migrated to a different hardware thread,
+ * KVM must ensure that no potentially stale translations be visible on
+ * the new hardware thread. Implementing this has been a source of
+ * difficulty.
+ *
+ * This test tries to create and invalidate different kinds of translations
+ * while moving vCPUs between CPUs, and checking for stale translations.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sched.h>
+#include <sys/ioctl.h>
+#include <sys/time.h>
+#include <sys/sysinfo.h>
+#include <signal.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "kselftest.h"
+#include "processor.h"
+#include "helpers.h"
+
+static int nr_cpus;
+static int *cpu_array;
+
+static void set_cpu(int cpu)
+{
+ cpu_set_t set;
+
+ CPU_ZERO(&set);
+ CPU_SET(cpu, &set);
+
+ if (sched_setaffinity(0, sizeof(set), &set) == -1) {
+ perror("sched_setaffinity");
+ exit(1);
+ }
+}
+
+static void set_random_cpu(void)
+{
+ set_cpu(cpu_array[random() % nr_cpus]);
+}
+
+static void init_sched_cpu(void)
+{
+ cpu_set_t possible_mask;
+ int i, cnt, nproc;
+
+ nproc = get_nprocs_conf();
+
+ TEST_ASSERT(!sched_getaffinity(0, sizeof(possible_mask), &possible_mask),
+ "sched_getaffinity failed, errno = %d (%s)", errno, strerror(errno));
+
+ nr_cpus = CPU_COUNT(&possible_mask);
+ cpu_array = malloc(nr_cpus * sizeof(int));
+
+ cnt = 0;
+ for (i = 0; i < nproc; i++) {
+ if (CPU_ISSET(i, &possible_mask)) {
+ cpu_array[cnt] = i;
+ cnt++;
+ }
+ }
+}
+
+static volatile bool timeout;
+
+static void set_timer(int sec)
+{
+ struct itimerval timer;
+
+ timeout = false;
+
+ timer.it_value.tv_sec = sec;
+ timer.it_value.tv_usec = 0;
+ timer.it_interval = timer.it_value;
+ TEST_ASSERT(setitimer(ITIMER_REAL, &timer, NULL) == 0,
+ "setitimer failed %s", strerror(errno));
+}
+
+static void sigalrm_handler(int sig)
+{
+ timeout = true;
+}
+
+static void init_timers(void)
+{
+ TEST_ASSERT(signal(SIGALRM, sigalrm_handler) != SIG_ERR,
+ "Failed to register SIGALRM handler, errno = %d (%s)",
+ errno, strerror(errno));
+}
+
+static inline void virt_invalidate_tlb(uint64_t gva)
+{
+ unsigned long rb, rs;
+ unsigned long is = 2, ric = 0, prs = 1, r = 1;
+
+ rb = is << 10;
+ rs = 0;
+
+ asm volatile("ptesync ; .machine push ; .machine power9 ; tlbiel %0,%1,%2,%3,%4 ; .machine pop ; ptesync"
+ :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r)
+ : "memory");
+}
+
+static inline void virt_invalidate_pwc(uint64_t gva)
+{
+ unsigned long rb, rs;
+ unsigned long is = 2, ric = 1, prs = 1, r = 1;
+
+ rb = is << 10;
+ rs = 0;
+
+ asm volatile("ptesync ; .machine push ; .machine power9 ; tlbiel %0,%1,%2,%3,%4 ; .machine pop ; ptesync"
+ :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r)
+ : "memory");
+}
+
+static inline void virt_invalidate_all(uint64_t gva)
+{
+ unsigned long rb, rs;
+ unsigned long is = 2, ric = 2, prs = 1, r = 1;
+
+ rb = is << 10;
+ rs = 0;
+
+ asm volatile("ptesync ; .machine push ; .machine power9 ; tlbiel %0,%1,%2,%3,%4 ; .machine pop ; ptesync"
+ :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r)
+ : "memory");
+}
+
+static inline void virt_invalidate_page(uint64_t gva)
+{
+ unsigned long rb, rs;
+ unsigned long is = 0, ric = 0, prs = 1, r = 1;
+ unsigned long ap = 0x5;
+ unsigned long epn = gva & ~0xffffUL;
+ unsigned long pid = 0;
+
+ rb = epn | (is << 10) | (ap << 5);
+ rs = pid << 32;
+
+ asm volatile("ptesync ; .machine push ; .machine power9 ; tlbiel %0,%1,%2,%3,%4 ; .machine pop ; ptesync"
+ :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r)
+ : "memory");
+}
+
+enum {
+ SYNC_BEFORE_LOAD1,
+ SYNC_BEFORE_LOAD2,
+ SYNC_BEFORE_STORE,
+ SYNC_BEFORE_INVALIDATE,
+ SYNC_DSI,
+};
+
+static void remap_dsi_handler(struct ex_regs *regs)
+{
+ GUEST_ASSERT(0);
+}
+
+#define PAGE1_VAL 0x1234567890abcdef
+#define PAGE2_VAL 0x5c5c5c5c5c5c5c5c
+
+static void remap_guest_code(vm_vaddr_t page)
+{
+ unsigned long *mem = (void *)page;
+
+ for (;;) {
+ unsigned long tmp;
+
+ GUEST_SYNC(SYNC_BEFORE_LOAD1);
+ asm volatile("ld %0,%1" : "=r"(tmp) : "m"(*mem));
+ GUEST_ASSERT(tmp == PAGE1_VAL);
+ GUEST_SYNC(SYNC_BEFORE_INVALIDATE);
+ virt_invalidate_page(page);
+ GUEST_SYNC(SYNC_BEFORE_LOAD2);
+ asm volatile("ld %0,%1" : "=r"(tmp) : "m"(*mem));
+ GUEST_ASSERT(tmp == PAGE2_VAL);
+ GUEST_SYNC(SYNC_BEFORE_INVALIDATE);
+ virt_invalidate_page(page);
+ }
+}
+
+static void remap_test(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ vm_vaddr_t vaddr;
+ vm_paddr_t pages[2];
+ uint64_t *hostptr;
+
+ /* Create VM */
+ vm = vm_create_with_one_vcpu(&vcpu, remap_guest_code);
+ vm_install_exception_handler(vm, 0x300, remap_dsi_handler);
+
+ vaddr = vm_vaddr_alloc_page(vm);
+ pages[0] = addr_gva2gpa(vm, vaddr);
+ pages[1] = vm_phy_page_alloc(vm, 0, vm->memslots[MEM_REGION_DATA]);
+
+ hostptr = addr_gpa2hva(vm, pages[0]);
+ *hostptr = PAGE1_VAL;
+
+ hostptr = addr_gpa2hva(vm, pages[1]);
+ *hostptr = PAGE2_VAL;
+
+ vcpu_args_set(vcpu, 1, vaddr);
+
+ set_random_cpu();
+ set_timer(10);
+
+ while (!timeout) {
+ vcpu_run(vcpu);
+
+ host_sync(vcpu, SYNC_BEFORE_LOAD1);
+ set_random_cpu();
+ vcpu_run(vcpu);
+
+ host_sync(vcpu, SYNC_BEFORE_INVALIDATE);
+ set_random_cpu();
+ TEST_ASSERT(virt_remap_pte(vm, vaddr, pages[1]), "Remap page1 failed");
+ vcpu_run(vcpu);
+
+ host_sync(vcpu, SYNC_BEFORE_LOAD2);
+ set_random_cpu();
+ vcpu_run(vcpu);
+
+ host_sync(vcpu, SYNC_BEFORE_INVALIDATE);
+ TEST_ASSERT(virt_remap_pte(vm, vaddr, pages[0]), "Remap page0 failed");
+ set_random_cpu();
+ }
+
+ vm_install_exception_handler(vm, 0x300, NULL);
+
+ kvm_vm_free(vm);
+}
+
+static void wrprotect_dsi_handler(struct ex_regs *regs)
+{
+ GUEST_SYNC(SYNC_DSI);
+ regs->nia += 4;
+}
+
+static void wrprotect_guest_code(vm_vaddr_t page)
+{
+ volatile char *mem = (void *)page;
+
+ for (;;) {
+ GUEST_SYNC(SYNC_BEFORE_STORE);
+ asm volatile("stb %1,%0" : "=m"(*mem) : "r"(1));
+ GUEST_SYNC(SYNC_BEFORE_INVALIDATE);
+ virt_invalidate_page(page);
+ }
+}
+
+static void wrprotect_test(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ vm_vaddr_t page;
+ void *hostptr;
+
+ /* Create VM */
+ vm = vm_create_with_one_vcpu(&vcpu, wrprotect_guest_code);
+ vm_install_exception_handler(vm, 0x300, wrprotect_dsi_handler);
+
+ page = vm_vaddr_alloc_page(vm);
+ hostptr = addr_gva2hva(vm, page);
+ memset(hostptr, 0, vm->page_size);
+
+ vcpu_args_set(vcpu, 1, page);
+
+ set_random_cpu();
+ set_timer(10);
+
+ while (!timeout) {
+ vcpu_run(vcpu);
+ host_sync(vcpu, SYNC_BEFORE_STORE);
+
+ vcpu_run(vcpu);
+ host_sync(vcpu, SYNC_BEFORE_INVALIDATE);
+
+ TEST_ASSERT(virt_wrprotect_pte(vm, page), "Wrprotect page failed");
+ /* Invalidate on different CPU */
+ set_random_cpu();
+ vcpu_run(vcpu);
+ host_sync(vcpu, SYNC_BEFORE_STORE);
+
+ /* Store on different CPU */
+ set_random_cpu();
+ vcpu_run(vcpu);
+ host_sync(vcpu, SYNC_DSI);
+ vcpu_run(vcpu);
+ host_sync(vcpu, SYNC_BEFORE_INVALIDATE);
+
+ TEST_ASSERT(virt_wrenable_pte(vm, page), "Wrenable page failed");
+
+ /* Invalidate on different CPU when we go around */
+ set_random_cpu();
+ }
+
+ vm_install_exception_handler(vm, 0x300, NULL);
+
+ kvm_vm_free(vm);
+}
+
+static void wrp_mt_dsi_handler(struct ex_regs *regs)
+{
+ GUEST_SYNC(SYNC_DSI);
+ regs->nia += 4;
+}
+
+static void wrp_mt_guest_code(vm_vaddr_t page, bool invalidates)
+{
+ volatile char *mem = (void *)page;
+
+ for (;;) {
+ GUEST_SYNC(SYNC_BEFORE_STORE);
+ asm volatile("stb %1,%0" : "=m"(*mem) : "r"(1));
+ if (invalidates) {
+ GUEST_SYNC(SYNC_BEFORE_INVALIDATE);
+ virt_invalidate_page(page);
+ }
+ }
+}
+
+static void wrp_mt_test(void)
+{
+ struct kvm_vcpu *vcpu[2];
+ struct kvm_vm *vm;
+ vm_vaddr_t page;
+ void *hostptr;
+
+ /* Create VM */
+ vm = vm_create_with_vcpus(2, wrp_mt_guest_code, vcpu);
+ vm_install_exception_handler(vm, 0x300, wrp_mt_dsi_handler);
+
+ page = vm_vaddr_alloc_page(vm);
+ hostptr = addr_gva2hva(vm, page);
+ memset(hostptr, 0, vm->page_size);
+
+ vcpu_args_set(vcpu[0], 2, page, 1);
+ vcpu_args_set(vcpu[1], 2, page, 0);
+
+ set_random_cpu();
+ set_timer(10);
+
+ while (!timeout) {
+ /* Run vcpu[1] only when page is writable, should never fault */
+ vcpu_run(vcpu[1]);
+ host_sync(vcpu[1], SYNC_BEFORE_STORE);
+
+ vcpu_run(vcpu[0]);
+ host_sync(vcpu[0], SYNC_BEFORE_STORE);
+
+ vcpu_run(vcpu[0]);
+ host_sync(vcpu[0], SYNC_BEFORE_INVALIDATE);
+
+ TEST_ASSERT(virt_wrprotect_pte(vm, page), "Wrprotect page failed");
+ /* Invalidate on different CPU */
+ set_random_cpu();
+ vcpu_run(vcpu[0]);
+ host_sync(vcpu[0], SYNC_BEFORE_STORE);
+
+ /* Store on different CPU */
+ set_random_cpu();
+ vcpu_run(vcpu[0]);
+ host_sync(vcpu[0], SYNC_DSI);
+ vcpu_run(vcpu[0]);
+ host_sync(vcpu[0], SYNC_BEFORE_INVALIDATE);
+
+ TEST_ASSERT(virt_wrenable_pte(vm, page), "Wrenable page failed");
+ /* Invalidate on different CPU when we go around */
+ set_random_cpu();
+ }
+
+ vm_install_exception_handler(vm, 0x300, NULL);
+
+ kvm_vm_free(vm);
+}
+
+static void proctbl_dsi_handler(struct ex_regs *regs)
+{
+ GUEST_SYNC(SYNC_DSI);
+ regs->nia += 4;
+}
+
+static void proctbl_guest_code(vm_vaddr_t page)
+{
+ volatile char *mem = (void *)page;
+
+ for (;;) {
+ GUEST_SYNC(SYNC_BEFORE_STORE);
+ asm volatile("stb %1,%0" : "=m"(*mem) : "r"(1));
+ GUEST_SYNC(SYNC_BEFORE_INVALIDATE);
+ virt_invalidate_all(page);
+ }
+}
+
+static void proctbl_test(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ vm_vaddr_t page;
+ vm_paddr_t orig_pgd;
+ vm_paddr_t alternate_pgd;
+ void *hostptr;
+
+ /* Create VM */
+ vm = vm_create_with_one_vcpu(&vcpu, proctbl_guest_code);
+ vm_install_exception_handler(vm, 0x300, proctbl_dsi_handler);
+
+ page = vm_vaddr_alloc_page(vm);
+ hostptr = addr_gva2hva(vm, page);
+ memset(hostptr, 0, vm->page_size);
+
+ orig_pgd = vm->pgd;
+ alternate_pgd = virt_pt_duplicate(vm);
+
+ /* Write protect the original PTE */
+ TEST_ASSERT(virt_wrprotect_pte(vm, page), "Wrprotect page failed");
+
+ vm->pgd = alternate_pgd;
+ set_radix_proc_table(vm, 0, vm->pgd);
+
+ vcpu_args_set(vcpu, 1, page);
+
+ set_random_cpu();
+ set_timer(10);
+
+ while (!timeout) {
+ vcpu_run(vcpu);
+ host_sync(vcpu, SYNC_BEFORE_STORE);
+
+ vcpu_run(vcpu);
+ host_sync(vcpu, SYNC_BEFORE_INVALIDATE);
+ /* Writeable store succeeds */
+
+ /* Swap page tables to write protected one */
+ vm->pgd = orig_pgd;
+ set_radix_proc_table(vm, 0, vm->pgd);
+
+ /* Invalidate on different CPU */
+ set_random_cpu();
+ vcpu_run(vcpu);
+ host_sync(vcpu, SYNC_BEFORE_STORE);
+
+ /* Store on different CPU */
+ set_random_cpu();
+ vcpu_run(vcpu);
+ host_sync(vcpu, SYNC_DSI);
+ vcpu_run(vcpu);
+ host_sync(vcpu, SYNC_BEFORE_INVALIDATE);
+
+ /* Swap page tables to write enabled one */
+ vm->pgd = alternate_pgd;
+ set_radix_proc_table(vm, 0, vm->pgd);
+
+ /* Invalidate on different CPU when we go around */
+ set_random_cpu();
+ }
+ vm->pgd = orig_pgd;
+ set_radix_proc_table(vm, 0, vm->pgd);
+
+ vm_install_exception_handler(vm, 0x300, NULL);
+
+ kvm_vm_free(vm);
+}
+
+struct testdef {
+ const char *name;
+ void (*test)(void);
+} testlist[] = {
+ { "tlbiel wrprotect test", wrprotect_test},
+ { "tlbiel wrprotect 2-vCPU test", wrp_mt_test},
+ { "tlbiel process table update test", proctbl_test},
+ { "tlbiel remap test", remap_test},
+};
+
+int main(int argc, char *argv[])
+{
+ int idx;
+
+ ksft_print_header();
+
+ ksft_set_plan(ARRAY_SIZE(testlist));
+
+ init_sched_cpu();
+ init_timers();
+
+ for (idx = 0; idx < ARRAY_SIZE(testlist); idx++) {
+ testlist[idx].test();
+ ksft_test_result_pass("%s\n", testlist[idx].name);
+ }
+
+ ksft_finished(); /* Print results and exit() accordingly */
+}
--
2.40.1
* [PATCH v3 6/6] KVM: PPC: selftests: Add interrupt performance tester
2023-06-08 3:24 [PATCH v3 0/6] KVM: selftests: add powerpc support Nicholas Piggin
` (4 preceding siblings ...)
2023-06-08 3:24 ` [PATCH v3 5/6] KVM: PPC: selftests: Add a TLBIEL virtualisation tester Nicholas Piggin
@ 2023-06-08 3:24 ` Nicholas Piggin
5 siblings, 0 replies; 9+ messages in thread
From: Nicholas Piggin @ 2023-06-08 3:24 UTC (permalink / raw)
To: kvm, Paolo Bonzini; +Cc: linuxppc-dev, Nicholas Piggin
Add a little perf tester for interrupts that go to guest, host, and
userspace.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/powerpc/interrupt_perf.c | 199 ++++++++++++++++++
2 files changed, 200 insertions(+)
create mode 100644 tools/testing/selftests/kvm/powerpc/interrupt_perf.c
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index aa3a8ca676c2..834f98971b0c 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -184,6 +184,7 @@ TEST_GEN_PROGS_riscv += kvm_page_table_test
TEST_GEN_PROGS_riscv += set_memory_region_test
TEST_GEN_PROGS_riscv += kvm_binary_stats_test
+TEST_GEN_PROGS_powerpc += powerpc/interrupt_perf
TEST_GEN_PROGS_powerpc += powerpc/null_test
TEST_GEN_PROGS_powerpc += powerpc/rtas_hcall
TEST_GEN_PROGS_powerpc += powerpc/tlbiel_test
diff --git a/tools/testing/selftests/kvm/powerpc/interrupt_perf.c b/tools/testing/selftests/kvm/powerpc/interrupt_perf.c
new file mode 100644
index 000000000000..50d078899e22
--- /dev/null
+++ b/tools/testing/selftests/kvm/powerpc/interrupt_perf.c
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Test basic guest interrupt/exit performance.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sched.h>
+#include <sys/ioctl.h>
+#include <sys/time.h>
+#include <sys/sysinfo.h>
+#include <signal.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "kselftest.h"
+#include "processor.h"
+#include "helpers.h"
+#include "hcall.h"
+
+static bool timeout;
+static unsigned long count;
+static struct kvm_vm *kvm_vm;
+
+static void set_timer(int sec)
+{
+ struct itimerval timer;
+
+ timeout = false;
+
+ timer.it_value.tv_sec = sec;
+ timer.it_value.tv_usec = 0;
+ timer.it_interval = timer.it_value;
+ TEST_ASSERT(setitimer(ITIMER_REAL, &timer, NULL) == 0,
+ "setitimer failed %s", strerror(errno));
+}
+
+static void sigalrm_handler(int sig)
+{
+ timeout = true;
+ sync_global_to_guest(kvm_vm, timeout);
+}
+
+static void init_timers(void)
+{
+ TEST_ASSERT(signal(SIGALRM, sigalrm_handler) != SIG_ERR,
+ "Failed to register SIGALRM handler, errno = %d (%s)",
+ errno, strerror(errno));
+}
+
+static void program_interrupt_handler(struct ex_regs *regs)
+{
+ regs->nia += 4;
+}
+
+static void program_interrupt_guest_code(void)
+{
+ unsigned long nr = 0;
+
+ while (!timeout) {
+ asm volatile("trap");
+ nr++;
+ barrier();
+ }
+ count = nr;
+
+ GUEST_DONE();
+}
+
+static void program_interrupt_test(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ /* Create VM */
+ vm = vm_create_with_one_vcpu(&vcpu, program_interrupt_guest_code);
+ kvm_vm = vm;
+ vm_install_exception_handler(vm, 0x700, program_interrupt_handler);
+
+ set_timer(1);
+
+ while (!timeout) {
+ vcpu_run(vcpu);
+ barrier();
+ }
+
+ sync_global_from_guest(vm, count);
+
+ kvm_vm = NULL;
+ vm_install_exception_handler(vm, 0x700, NULL);
+
+ kvm_vm_free(vm);
+
+ printf("%lu guest interrupts per second\n", count);
+ count = 0;
+}
+
+static void heai_guest_code(void)
+{
+ unsigned long nr = 0;
+
+ while (!timeout) {
+ asm volatile(".long 0");
+ nr++;
+ barrier();
+ }
+ count = nr;
+
+ GUEST_DONE();
+}
+
+static void heai_test(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ /* Create VM */
+ vm = vm_create_with_one_vcpu(&vcpu, heai_guest_code);
+ kvm_vm = vm;
+ vm_install_exception_handler(vm, 0x700, program_interrupt_handler);
+
+ set_timer(1);
+
+ while (!timeout) {
+ vcpu_run(vcpu);
+ barrier();
+ }
+
+ sync_global_from_guest(vm, count);
+
+ kvm_vm = NULL;
+ vm_install_exception_handler(vm, 0x700, NULL);
+
+ kvm_vm_free(vm);
+
+ printf("%lu guest exits per second\n", count);
+ count = 0;
+}
+
+static void hcall_guest_code(void)
+{
+ for (;;)
+ hcall0(H_RTAS);
+}
+
+static void hcall_test(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ /* Create VM */
+ vm = vm_create_with_one_vcpu(&vcpu, hcall_guest_code);
+ kvm_vm = vm;
+
+ set_timer(1);
+
+ while (!timeout) {
+ vcpu_run(vcpu);
+ count++;
+ barrier();
+ }
+
+ kvm_vm = NULL;
+
+ kvm_vm_free(vm);
+
+ printf("%lu KVM exits per second\n", count);
+ count = 0;
+}
+
+struct testdef {
+ const char *name;
+ void (*test)(void);
+} testlist[] = {
+ { "guest interrupt test", program_interrupt_test},
+ { "guest exit test", heai_test},
+ { "KVM exit test", hcall_test},
+};
+
+int main(int argc, char *argv[])
+{
+ int idx;
+
+ ksft_print_header();
+
+ ksft_set_plan(ARRAY_SIZE(testlist));
+
+ init_timers();
+
+ for (idx = 0; idx < ARRAY_SIZE(testlist); idx++) {
+ testlist[idx].test();
+ ksft_test_result_pass("%s\n", testlist[idx].name);
+ }
+
+ ksft_finished(); /* Print results and exit() accordingly */
+}
--
2.40.1
* Re: [PATCH v3 3/6] KVM: PPC: selftests: add support for powerpc
2023-06-08 3:24 ` [PATCH v3 3/6] KVM: PPC: selftests: add support for powerpc Nicholas Piggin
@ 2023-06-14 0:20 ` Joel Stanley
2023-08-02 22:44 ` Sean Christopherson
1 sibling, 0 replies; 9+ messages in thread
From: Joel Stanley @ 2023-06-14 0:20 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: Paolo Bonzini, linuxppc-dev, kvm
On Thu, 8 Jun 2023 at 03:28, Nicholas Piggin <npiggin@gmail.com> wrote:
>
> Implement KVM selftests support for powerpc (Book3S-64).
>
> ucalls are implemented with an unsupported PAPR hcall number which will
> always cause KVM to exit to userspace.
>
> Virtual memory is implemented for the radix MMU, and only a base page
> size is supported (both 4K and 64K).
>
> Guest interrupts are taken in real-mode, so require a page allocated at
> gRA 0x0. Interrupt entry is complicated because gVA:gRA is not 1:1 mapped
> (like the kernel is), so the MMU can not just be switched on and
> off.
I saw a few failures on a power9 running Ubuntu's 6.2.0-20-generic kernel:
# selftests: kvm: kvm_create_max_vcpus
# KVM_CAP_MAX_VCPU_ID: 16384
# KVM_CAP_MAX_VCPUS: 2048
# Testing creating 2048 vCPUs, with IDs 0...2047.
# Testing creating 2048 vCPUs, with IDs 14336...16383.
# ==== Test Assertion Failure ====
# lib/kvm_util.c:1221: vcpu->fd >= 0
# pid=40390 tid=40390 errno=22 - Invalid argument
# 1 0x0000000010006903: __vm_vcpu_add at kvm_util.c:1221
# 2 0x0000000010002e53: test_vcpu_creation at kvm_create_max_vcpus.c:35 (discriminator 3)
# 3 0x0000000010002953: main at kvm_create_max_vcpus.c:90
# 4 0x0000795fb3224c23: ?? ??:0
# 5 0x0000795fb3224e6b: ?? ??:0
# KVM_CREATE_VCPU failed, rc: -1 errno: 22 (Invalid argument)
not ok 10 selftests: kvm: kvm_create_max_vcpus # exit=254
# selftests: kvm: max_guest_memory_test
# No guest physical pages available, paddr_min: 0x180000 page_size: 0x10000 memslot: 0 num_pages: 1 align: 1
# ---- vm dump ----
# mode: 0xc
# fd: 6
# page_size: 0x10000
# Mem Regions:
# guest_phys: 0x0 size: 0x4200000 host_virt: 0x7ce4c0800000
# unused_phy_pages: 0x1
# Mapped Virtual Pages:
# 0x1, 0xac:0x183, 0x1000:0x1003
# pgd_created: 1
# Virtual Translation Tables:
# PDE1[0] gVA:0x0000000000000000
# PDE2[0] gVA:0x0000000000000000
# PDE3[0] gVA:0x0000000000000000
# PTE[1] gVA:0x0000000000010000 -> gRA:0x0000000000060000
# PDE3[5] gVA:0x0000000000a00000
# PTE[12] gVA:0x0000000000ac0000 -> gRA:0x0000000000070000
and then finally:
# ==== Test Assertion Failure ====
# lib/kvm_util.c:1962: false
# pid=40446 tid=40446 errno=0 - Success
# 1 0x0000000010008f83: vm_phy_pages_alloc_align at kvm_util.c:1962
# 2 0x00000000100110d3: __virt_arch_pg_map at processor.c:232
# 3 0x0000000010002bcf: virt_pg_map at kvm_util_base.h:877 (discriminator 3)
# 4 (inlined by) main at max_guest_memory_test.c:249 (discriminator 3)
# 5 0x00007ce4c4a24c23: ?? ??:0
# 6 0x00007ce4c4a24e6b: ?? ??:0
# false
not ok 12 selftests: kvm: max_guest_memory_test # exit=254
# selftests: kvm: rseq_test
# ==== Test Assertion Failure ====
# rseq_test.c:258: i > (NR_TASK_MIGRATIONS / 2)
# pid=40529 tid=40529 errno=4 - Interrupted system call
# 1 0x0000000010002e57: main at rseq_test.c:258
# 2 0x00007369e2824c23: ?? ??:0
# 3 0x00007369e2824e6b: ?? ??:0
# Only performed 32590 KVM_RUNs, task stalled too much?
#
not ok 15 selftests: kvm: rseq_test # exit=254
I have attached the log from the full test run.
>
> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> +#ifdef __powerpc__
> + {
> + TEST_ASSERT(kvm_check_cap(KVM_CAP_PPC_MMU_RADIX),
> + "Radix MMU not available, KVM selftests "
> + "does not support Hash MMU!");
Back on the power8 system, this produces a backtrace along with the warning:
TAP version 13
1..17
# selftests: kvm: interrupt_perf
# ==== Test Assertion Failure ====
# lib/guest_modes.c:95: kvm_check_cap(KVM_CAP_PPC_MMU_RADIX)
# pid=3800487 tid=3800487 errno=0 - Success
# 1 0x0000000010003d57: guest_modes_append_default at guest_modes.c:95
# 2 0x0000000010011d67: kvm_selftest_arch_init at processor.c:540
# 3 0x00000000100029af: kvm_selftest_init at kvm_util.c:2178
# 4 0x000000001001325b: __libc_csu_init at ??:?
# 5 0x0000742d64f54c5b: ?? ??:0
# 6 0x0000742d64f54ea3: ?? ??:0
# Radix MMU not available, KVM selftests does not support Hash MMU!
not ok 1 selftests: kvm: interrupt_perf # exit=254
You could instead use TEST_REQUIRE:
TAP version 13
1..17
# selftests: kvm: interrupt_perf
# 1..0 # SKIP - Requirement not met: kvm_check_cap(KVM_CAP_PPC_MMU_RADIX)
ok 1 selftests: kvm: interrupt_perf # SKIP
> + /* Radix guest EA and RA are 52-bit on POWER9 and POWER10 */
> + if (sysconf(_SC_PAGESIZE) == 4096)
> + vm_mode_default = VM_MODE_P52V52_4K;
> + else
> + vm_mode_default = VM_MODE_P52V52_64K;
> + guest_mode_append(VM_MODE_P52V52_4K, true, true);
> + guest_mode_append(VM_MODE_P52V52_64K, true, true);
> + }
> +#endif
> }
[-- Attachment #2: kvm-selftests-powerpc-6.2.txt.gz --]
[-- Type: application/gzip, Size: 168609 bytes --]
* Re: [PATCH v3 3/6] KVM: PPC: selftests: add support for powerpc
2023-06-08 3:24 ` [PATCH v3 3/6] KVM: PPC: selftests: add support for powerpc Nicholas Piggin
2023-06-14 0:20 ` Joel Stanley
@ 2023-08-02 22:44 ` Sean Christopherson
1 sibling, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2023-08-02 22:44 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: Paolo Bonzini, linuxppc-dev, kvm
On Thu, Jun 08, 2023, Nicholas Piggin wrote:
> diff --git a/tools/testing/selftests/kvm/lib/powerpc/ucall.c b/tools/testing/selftests/kvm/lib/powerpc/ucall.c
> new file mode 100644
> index 000000000000..ce0ddde45fef
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/powerpc/ucall.c
> @@ -0,0 +1,30 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * ucall support. A ucall is a "hypercall to host userspace".
> + */
> +#include "kvm_util.h"
> +#include "hcall.h"
> +
> +void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
> +{
> +}
> +
> +void ucall_arch_do_ucall(vm_vaddr_t uc)
> +{
> + hcall2(H_UCALL, UCALL_R4_UCALL, (uintptr_t)(uc));
> +}
FYI, the ucall stuff will silently conflict with treewide (where KVM selftests is
the tree) changes that I've queued[*]. It probably makes sense for the initial PPC
support to go through the KVM tree anyways, so I'd be more than happy to grab this
series via kvm-x86/selftests if you're willing to do the code changes (should be
minor, knock wood). Alternatively, the immutable tag I'm planning on creating
could be merged into the PPC tree, but that seems like overkill.
Either way, please Cc me on the next version (assuming there is a next version),
if only so that I can give you an early heads up if/when the next treewide change
alongs ;-)
[*] https://lore.kernel.org/all/169101267140.1755771.17089576255751273053.b4-ty@google.com