* [RFC 0/4] KVM: selftests: add powerpc support
@ 2026-05-15 16:04 Ritesh Harjani (IBM)
2026-05-15 16:04 ` [RFC 1/4] KVM: selftests: Move pgd_created check into virt_pgd_alloc Ritesh Harjani (IBM)
` (3 more replies)
0 siblings, 4 replies; 5+ messages in thread
From: Ritesh Harjani (IBM) @ 2026-05-15 16:04 UTC (permalink / raw)
To: kvm
Cc: linuxppc-dev, Madhavan Srinivasan, Harsh Prateek Bora,
Christophe Leroy, Venkat Rao Bagalkote, Nicholas Piggin,
linux-kernel, Ritesh Harjani (IBM)
Hi All,
This series primarily adds KVM selftests support for powerpc (64-bit, Book3S,
radix MMU).
This patch series is originally Nick's work; I have mainly rebased it on the
latest upstream tree. Since the rebase required a few changes to all four
patches, I have dropped the earlier Acked-by from Michael Ellerman.
Since the last series was posted three years ago [1], I am resetting the
version to RFC and posting an early version (a few tests are still pending) to
gather early review comments. BTW, I ran this on P9 (PowerNV) with radix
and have not found any regressions so far.
Note that I am planning to run these selftests with different configurations on
PowerPC as well and will share the test results soon. This rebase was done as
part of a larger effort to improve the selftests infrastructure for the Linux on
PowerPC tree. Thanks to Harsh and Maddy for their help on this.
[1]: https://lore.kernel.org/all/20231120122920.293076-1-npiggin@gmail.com/
Nicholas Piggin (4):
KVM: selftests: Move pgd_created check into virt_pgd_alloc
KVM: selftests: Add aligned guest physical page allocator
KVM: PPC: selftests: add support for powerpc
KVM: PPC: selftests: powerpc enable kvm_create_max_vcpus test
MAINTAINERS | 2 +
tools/testing/selftests/kvm/Makefile | 2 +-
tools/testing/selftests/kvm/Makefile.kvm | 10 +
.../testing/selftests/kvm/include/kvm_util.h | 34 +-
.../selftests/kvm/include/powerpc/hcall.h | 17 +
.../kvm/include/powerpc/kvm_util_arch.h | 22 +
.../selftests/kvm/include/powerpc/ppc_asm.h | 32 ++
.../selftests/kvm/include/powerpc/processor.h | 38 ++
.../selftests/kvm/include/powerpc/ucall.h | 21 +
.../selftests/kvm/kvm_create_max_vcpus.c | 9 +
.../selftests/kvm/lib/arm64/processor.c | 4 -
tools/testing/selftests/kvm/lib/guest_modes.c | 20 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 41 +-
.../selftests/kvm/lib/loongarch/processor.c | 4 -
.../selftests/kvm/lib/powerpc/handlers.S | 93 ++++
.../testing/selftests/kvm/lib/powerpc/hcall.c | 45 ++
.../selftests/kvm/lib/powerpc/processor.c | 481 ++++++++++++++++++
.../testing/selftests/kvm/lib/powerpc/ucall.c | 22 +
.../selftests/kvm/lib/riscv/processor.c | 4 -
.../selftests/kvm/lib/s390/processor.c | 4 -
.../testing/selftests/kvm/lib/x86/processor.c | 9 +-
21 files changed, 869 insertions(+), 45 deletions(-)
create mode 100644 tools/testing/selftests/kvm/include/powerpc/hcall.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/kvm_util_arch.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/ppc_asm.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/processor.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/ucall.h
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/handlers.S
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/hcall.c
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/processor.c
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/ucall.c
--
2.39.5
^ permalink raw reply [flat|nested] 5+ messages in thread
* [RFC 1/4] KVM: selftests: Move pgd_created check into virt_pgd_alloc
2026-05-15 16:04 [RFC 0/4] KVM: selftests: add powerpc support Ritesh Harjani (IBM)
@ 2026-05-15 16:04 ` Ritesh Harjani (IBM)
2026-05-15 16:04 ` [RFC 2/4] KVM: selftests: Add aligned guest physical page allocator Ritesh Harjani (IBM)
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Ritesh Harjani (IBM) @ 2026-05-15 16:04 UTC (permalink / raw)
To: kvm
Cc: linuxppc-dev, Madhavan Srinivasan, Harsh Prateek Bora,
Christophe Leroy, Venkat Rao Bagalkote, Nicholas Piggin,
linux-kernel, Ritesh Harjani (IBM)
From: Nicholas Piggin <npiggin@gmail.com>
All virt_arch_pgd_alloc() implementations do the same test-and-set of
pgd_created. Move this into common code.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[Rebased to latest mainline tree]
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
tools/testing/selftests/kvm/include/kvm_util.h | 5 +++++
tools/testing/selftests/kvm/lib/arm64/processor.c | 4 ----
tools/testing/selftests/kvm/lib/loongarch/processor.c | 4 ----
tools/testing/selftests/kvm/lib/riscv/processor.c | 4 ----
tools/testing/selftests/kvm/lib/s390/processor.c | 4 ----
tools/testing/selftests/kvm/lib/x86/processor.c | 9 +++------
6 files changed, 8 insertions(+), 22 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 2ecaaa0e9965..3666a8530f31 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -1197,7 +1197,12 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm);
static inline void virt_pgd_alloc(struct kvm_vm *vm)
{
+ if (vm->mmu.pgd_created)
+ return;
+
virt_arch_pgd_alloc(vm);
+
+ vm->mmu.pgd_created = true;
}
/*
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 01325bf4d36f..498fbcb0ea16 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -112,13 +112,9 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
{
size_t nr_pages = vm_page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size;
- if (vm->mmu.pgd_created)
- return;
-
vm->mmu.pgd = vm_phy_pages_alloc(vm, nr_pages,
KVM_GUEST_PAGE_TABLE_MIN_PADDR,
vm->memslots[MEM_REGION_PT]);
- vm->mmu.pgd_created = true;
}
static void _virt_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa,
diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
index 64d91fb76522..207055db5f5d 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/processor.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
@@ -51,9 +51,6 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
int i;
gpa_t child, table;
- if (vm->mmu.pgd_created)
- return;
-
child = table = 0;
for (i = 0; i < vm->mmu.pgtable_levels; i++) {
invalid_pgtable[i] = child;
@@ -64,7 +61,6 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
child = table;
}
vm->mmu.pgd = table;
- vm->mmu.pgd_created = true;
}
static int virt_pte_none(u64 *ptep, int level)
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index ded5429f3448..75a5d4c46001 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -66,13 +66,9 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
{
size_t nr_pages = vm_page_align(vm, ptrs_per_pte(vm) * 8) / vm->page_size;
- if (vm->mmu.pgd_created)
- return;
-
vm->mmu.pgd = vm_phy_pages_alloc(vm, nr_pages,
KVM_GUEST_PAGE_TABLE_MIN_PADDR,
vm->memslots[MEM_REGION_PT]);
- vm->mmu.pgd_created = true;
}
void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa)
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index a9adb3782b35..342b7c92463e 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -17,16 +17,12 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
vm->page_size);
- if (vm->mmu.pgd_created)
- return;
-
gpa = vm_phy_pages_alloc(vm, PAGES_PER_REGION,
KVM_GUEST_PAGE_TABLE_MIN_PADDR,
vm->memslots[MEM_REGION_PT]);
memset(addr_gpa2hva(vm, gpa), 0xff, PAGES_PER_REGION * vm->page_size);
vm->mmu.pgd = gpa;
- vm->mmu.pgd_created = true;
}
/*
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index b51467d70f6e..e420afdfbcfb 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -164,12 +164,9 @@ bool kvm_is_tdp_enabled(void)
static void virt_mmu_init(struct kvm_vm *vm, struct kvm_mmu *mmu,
struct pte_masks *pte_masks)
{
- /* If needed, create the top-level page table. */
- if (!mmu->pgd_created) {
- mmu->pgd = vm_alloc_page_table(vm);
- mmu->pgd_created = true;
- mmu->arch.pte_masks = *pte_masks;
- }
+ /* Create the top-level page table. */
+ mmu->pgd = vm_alloc_page_table(vm);
+ mmu->arch.pte_masks = *pte_masks;
TEST_ASSERT(mmu->pgtable_levels == 4 || mmu->pgtable_levels == 5,
"Selftests MMU only supports 4-level and 5-level paging, not %u-level paging",
--
2.39.5
* [RFC 2/4] KVM: selftests: Add aligned guest physical page allocator
2026-05-15 16:04 [RFC 0/4] KVM: selftests: add powerpc support Ritesh Harjani (IBM)
2026-05-15 16:04 ` [RFC 1/4] KVM: selftests: Move pgd_created check into virt_pgd_alloc Ritesh Harjani (IBM)
@ 2026-05-15 16:04 ` Ritesh Harjani (IBM)
2026-05-15 16:04 ` [RFC 3/4] KVM: PPC: selftests: add support for powerpc Ritesh Harjani (IBM)
2026-05-15 16:04 ` [RFC 4/4] KVM: PPC: selftests: powerpc enable kvm_create_max_vcpus test Ritesh Harjani (IBM)
3 siblings, 0 replies; 5+ messages in thread
From: Ritesh Harjani (IBM) @ 2026-05-15 16:04 UTC (permalink / raw)
To: kvm
Cc: linuxppc-dev, Madhavan Srinivasan, Harsh Prateek Bora,
Christophe Leroy, Venkat Rao Bagalkote, Nicholas Piggin,
linux-kernel, Ritesh Harjani (IBM)
From: Nicholas Piggin <npiggin@gmail.com>
powerpc will require this to allocate MMU tables in guest memory that
are larger than the guest base page size.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[Rebased to latest mainline tree]
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
.../testing/selftests/kvm/include/kvm_util.h | 20 +++++++++--
tools/testing/selftests/kvm/lib/kvm_util.c | 33 +++++++++----------
2 files changed, 33 insertions(+), 20 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 3666a8530f31..c515c918c2c9 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -991,8 +991,8 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
const char *exit_reason_str(unsigned int exit_reason);
gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t min_gpa, u32 memslot);
-gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, gpa_t min_gpa,
- u32 memslot, bool protected);
+gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, size_t align,
+ gpa_t min_gpa, u32 memslot, bool protected);
gpa_t vm_alloc_page_table(struct kvm_vm *vm);
static inline gpa_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
@@ -1003,10 +1003,24 @@ static inline gpa_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
* protected memory, as the majority of memory for such VMs is
* protected, i.e. using shared memory is effectively opt-in.
*/
- return __vm_phy_pages_alloc(vm, num, min_gpa, memslot,
+ return __vm_phy_pages_alloc(vm, num, 1, min_gpa, memslot,
vm_arch_has_protected_memory(vm));
}
+static inline gpa_t vm_phy_pages_alloc_align(struct kvm_vm *vm, size_t num,
+ size_t align, gpa_t min_gpa,
+ u32 memslot)
+{
+ /*
+ * By default, allocate memory as protected for VMs that support
+ * protected memory, as the majority of memory for such VMs is
+ * protected, i.e. using shared memory is effectively opt-in.
+ */
+ return __vm_phy_pages_alloc(vm, num, align, min_gpa, memslot,
+ vm_arch_has_protected_memory(vm));
+}
+
+
/*
* ____vm_create() does KVM_CREATE_VM and little else. __vm_create() also
* loads the test binary into guest memory and creates an IRQ chip (x86 only).
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 2a76eca7029d..cdb004c9ba56 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1442,7 +1442,7 @@ static gva_t ____vm_alloc(struct kvm_vm *vm, size_t sz, gva_t min_gva,
u64 pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
virt_pgd_alloc(vm);
- gpa_t gpa = __vm_phy_pages_alloc(vm, pages,
+ gpa_t gpa = __vm_phy_pages_alloc(vm, pages, 1,
KVM_UTIL_MIN_PFN * vm->page_size,
vm->memslots[type], protected);
@@ -2021,7 +2021,7 @@ const char *exit_reason_str(unsigned int exit_reason)
* and their base address is returned. A TEST_ASSERT failure occurs if
* not enough pages are available at or above min_gpa.
*/
-gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, size_t align,
gpa_t min_gpa, u32 memslot,
bool protected)
{
@@ -2039,23 +2039,22 @@ gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
TEST_ASSERT(!protected || region->protected_phy_pages,
"Region doesn't support protected memory");
- base = pg = min_gpa >> vm->page_shift;
- do {
- for (; pg < base + num; ++pg) {
- if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
- base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
- break;
+ base = min_gpa >> vm->page_shift;
+again:
+ base = (base + align - 1) & ~(align - 1);
+ for (pg = base; pg < base + num; ++pg) {
+ if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
+ base = sparsebit_next_set(region->unused_phy_pages, pg);
+ if (!base) {
+ fprintf(stderr, "No guest physical page available, "
+ "min_gpa: 0x%lx page_size: 0x%x memslot: %u\n",
+ min_gpa, vm->page_size, memslot);
+ fputs("---- vm dump ----\n", stderr);
+ vm_dump(stderr, vm, 2);
+ abort();
}
+ goto again;
}
- } while (pg && pg != base + num);
-
- if (pg == 0) {
- fprintf(stderr, "No guest physical page available, "
- "min_gpa: 0x%lx page_size: 0x%x memslot: %u\n",
- min_gpa, vm->page_size, memslot);
- fputs("---- vm dump ----\n", stderr);
- vm_dump(stderr, vm, 2);
- abort();
}
for (pg = base; pg < base + num; ++pg) {
--
2.39.5
* [RFC 3/4] KVM: PPC: selftests: add support for powerpc
2026-05-15 16:04 [RFC 0/4] KVM: selftests: add powerpc support Ritesh Harjani (IBM)
2026-05-15 16:04 ` [RFC 1/4] KVM: selftests: Move pgd_created check into virt_pgd_alloc Ritesh Harjani (IBM)
2026-05-15 16:04 ` [RFC 2/4] KVM: selftests: Add aligned guest physical page allocator Ritesh Harjani (IBM)
@ 2026-05-15 16:04 ` Ritesh Harjani (IBM)
2026-05-15 16:04 ` [RFC 4/4] KVM: PPC: selftests: powerpc enable kvm_create_max_vcpus test Ritesh Harjani (IBM)
3 siblings, 0 replies; 5+ messages in thread
From: Ritesh Harjani (IBM) @ 2026-05-15 16:04 UTC (permalink / raw)
To: kvm
Cc: linuxppc-dev, Madhavan Srinivasan, Harsh Prateek Bora,
Christophe Leroy, Venkat Rao Bagalkote, Nicholas Piggin,
linux-kernel, Ritesh Harjani (IBM)
From: Nicholas Piggin <npiggin@gmail.com>
Implement KVM selftests support for powerpc (Book3S-64).
ucalls are implemented with an unsupported PAPR hcall number, which always
causes KVM to exit to userspace.
Virtual memory is implemented for the radix MMU, and only base page sizes
are supported (both 4K and 64K).
Guest interrupts are taken in real mode, so they require a page allocated at
gRA 0x0. Interrupt entry is complicated because gVA:gRA is not 1:1
mapped (as the kernel is), so the MMU cannot simply be switched on and
off.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[Rebased to latest mainline tree]
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
MAINTAINERS | 2 +
tools/testing/selftests/kvm/Makefile | 2 +-
tools/testing/selftests/kvm/Makefile.kvm | 10 +
.../testing/selftests/kvm/include/kvm_util.h | 9 +
.../selftests/kvm/include/powerpc/hcall.h | 17 +
.../kvm/include/powerpc/kvm_util_arch.h | 22 +
.../selftests/kvm/include/powerpc/ppc_asm.h | 32 ++
.../selftests/kvm/include/powerpc/processor.h | 38 ++
.../selftests/kvm/include/powerpc/ucall.h | 21 +
tools/testing/selftests/kvm/lib/guest_modes.c | 20 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 8 +
.../selftests/kvm/lib/powerpc/handlers.S | 93 ++++
.../testing/selftests/kvm/lib/powerpc/hcall.c | 45 ++
.../selftests/kvm/lib/powerpc/processor.c | 481 ++++++++++++++++++
.../testing/selftests/kvm/lib/powerpc/ucall.c | 22 +
15 files changed, 819 insertions(+), 3 deletions(-)
create mode 100644 tools/testing/selftests/kvm/include/powerpc/hcall.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/kvm_util_arch.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/ppc_asm.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/processor.h
create mode 100644 tools/testing/selftests/kvm/include/powerpc/ucall.h
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/handlers.S
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/hcall.c
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/processor.c
create mode 100644 tools/testing/selftests/kvm/lib/powerpc/ucall.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 6aa3fe2ee1bb..9d0a0cb32811 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -14115,6 +14115,8 @@ F: arch/powerpc/include/asm/kvm*
F: arch/powerpc/include/uapi/asm/kvm*
F: arch/powerpc/kernel/kvm*
F: arch/powerpc/kvm/
+F: tools/testing/selftests/kvm/*/powerpc/
+F: tools/testing/selftests/kvm/powerpc/
KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)
M: Anup Patel <anup@brainfault.org>
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index f2b223072b62..03d91f00092f 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -3,7 +3,7 @@ top_srcdir = ../../../..
include $(top_srcdir)/scripts/subarch.include
ARCH ?= $(SUBARCH)
-ifeq ($(ARCH),$(filter $(ARCH),arm64 s390 riscv x86 x86_64 loongarch))
+ifeq ($(ARCH),$(filter $(ARCH),arm64 s390 riscv x86 x86_64 loongarch powerpc))
# Top-level selftests allows ARCH=x86_64 :-(
ifeq ($(ARCH),x86_64)
override ARCH := x86
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 9118a5a51b89..825bea7f851d 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -52,6 +52,11 @@ LIBKVM_loongarch += lib/loongarch/processor.c
LIBKVM_loongarch += lib/loongarch/ucall.c
LIBKVM_loongarch += lib/loongarch/exception.S
+LIBKVM_powerpc += lib/powerpc/handlers.S
+LIBKVM_powerpc += lib/powerpc/processor.c
+LIBKVM_powerpc += lib/powerpc/ucall.c
+LIBKVM_powerpc += lib/powerpc/hcall.c
+
# Non-compiled test targets
TEST_PROGS_x86 += x86/nx_huge_pages_test.sh
@@ -239,6 +244,11 @@ TEST_GEN_PROGS_loongarch += memslot_perf_test
TEST_GEN_PROGS_loongarch += set_memory_region_test
TEST_GEN_PROGS_loongarch += steal_time
+TEST_GEN_PROGS_powerpc = $(TEST_GEN_PROGS_COMMON)
+TEST_GEN_PROGS_powerpc += access_tracking_perf_test
+TEST_GEN_PROGS_powerpc += dirty_log_perf_test
+TEST_GEN_PROGS_powerpc += hardware_disable_test
+
SPLIT_TESTS += arch_timer
SPLIT_TESTS += get-reg-list
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index c515c918c2c9..10f03a182c8b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -209,6 +209,9 @@ enum vm_guest_mode {
VM_MODE_P41V48_4K,
VM_MODE_P41V39_4K,
+ VM_MODE_P52V52_4K, /* For powerpc64 */
+ VM_MODE_P52V52_64K,
+
NUM_VM_MODES,
};
@@ -268,6 +271,12 @@ extern enum vm_guest_mode vm_mode_default;
#define MIN_PAGE_SHIFT 12U
#define ptes_per_page(page_size) ((page_size) / 8)
+#elif defined(__powerpc64__)
+
+#define VM_MODE_DEFAULT vm_mode_default
+#define MIN_PAGE_SHIFT 12U
+#define ptes_per_page(page_size) ((page_size) / 8)
+
#endif
#define VM_SHAPE_DEFAULT VM_SHAPE(VM_MODE_DEFAULT)
diff --git a/tools/testing/selftests/kvm/include/powerpc/hcall.h b/tools/testing/selftests/kvm/include/powerpc/hcall.h
new file mode 100644
index 000000000000..4028baa6c5d8
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/powerpc/hcall.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * powerpc hcall defines
+ */
+#ifndef SELFTEST_KVM_HCALL_H
+#define SELFTEST_KVM_HCALL_H
+
+#include <linux/compiler.h>
+
+/* Ucalls use unimplemented PAPR hcall 0 which exits KVM */
+#define H_UCALL 0
+
+int64_t hcall0(uint64_t token);
+int64_t hcall1(uint64_t token, uint64_t arg1);
+int64_t hcall2(uint64_t token, uint64_t arg1, uint64_t arg2);
+
+#endif
diff --git a/tools/testing/selftests/kvm/include/powerpc/kvm_util_arch.h b/tools/testing/selftests/kvm/include/powerpc/kvm_util_arch.h
new file mode 100644
index 000000000000..5d45c25cd299
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/powerpc/kvm_util_arch.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTEST_KVM_UTIL_ARCH_H
+#define SELFTEST_KVM_UTIL_ARCH_H
+
+#include <stdint.h>
+
+#include "kvm_util_types.h"
+
+struct kvm_mmu_arch {};
+
+/* Page table fragment cache for guest page tables < page size */
+struct vm_pt_frag_cache {
+ gpa_t page;
+ size_t page_nr_used;
+};
+
+struct kvm_vm_arch {
+ gpa_t prtb; /* process table */
+ struct vm_pt_frag_cache pt_frag_cache[2]; /* 256B and 4KB PT caches */
+};
+
+#endif /* SELFTEST_KVM_UTIL_ARCH_H */
diff --git a/tools/testing/selftests/kvm/include/powerpc/ppc_asm.h b/tools/testing/selftests/kvm/include/powerpc/ppc_asm.h
new file mode 100644
index 000000000000..b9df64659792
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/powerpc/ppc_asm.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * powerpc asm specific defines
+ */
+#ifndef SELFTEST_KVM_PPC_ASM_H
+#define SELFTEST_KVM_PPC_ASM_H
+
+#define STACK_FRAME_MIN_SIZE 112 /* Could be 32 on ELFv2 */
+#define STACK_REDZONE_SIZE 512
+
+#define INT_FRAME_SIZE (STACK_FRAME_MIN_SIZE + STACK_REDZONE_SIZE)
+
+#define SPR_SRR0 0x01a
+#define SPR_SRR1 0x01b
+#define SPR_CFAR 0x01c
+
+#define MSR_SF 0x8000000000000000ULL
+#define MSR_HV 0x1000000000000000ULL
+#define MSR_VEC 0x0000000002000000ULL
+#define MSR_VSX 0x0000000000800000ULL
+#define MSR_EE 0x0000000000008000ULL
+#define MSR_PR 0x0000000000004000ULL
+#define MSR_FP 0x0000000000002000ULL
+#define MSR_ME 0x0000000000001000ULL
+#define MSR_IR 0x0000000000000020ULL
+#define MSR_DR 0x0000000000000010ULL
+#define MSR_RI 0x0000000000000002ULL
+#define MSR_LE 0x0000000000000001ULL
+
+#define LPCR_ILE 0x0000000002000000ULL
+
+#endif
diff --git a/tools/testing/selftests/kvm/include/powerpc/processor.h b/tools/testing/selftests/kvm/include/powerpc/processor.h
new file mode 100644
index 000000000000..cb75b77c33bb
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/powerpc/processor.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * powerpc processor specific defines
+ */
+#ifndef SELFTEST_KVM_PROCESSOR_H
+#define SELFTEST_KVM_PROCESSOR_H
+
+#include <linux/compiler.h>
+#include "ppc_asm.h"
+
+extern unsigned char __interrupts_start[];
+extern unsigned char __interrupts_end[];
+
+struct kvm_vm;
+struct kvm_vcpu;
+
+struct ex_regs {
+ uint64_t gprs[32];
+ uint64_t nia;
+ uint64_t msr;
+ uint64_t cfar;
+ uint64_t lr;
+ uint64_t ctr;
+ uint64_t xer;
+ uint32_t cr;
+ uint32_t trap;
+ uint64_t vaddr; /* vaddr of this struct */
+};
+
+void vm_install_exception_handler(struct kvm_vm *vm, int vector,
+ void (*handler)(struct ex_regs *));
+
+static inline void cpu_relax(void)
+{
+ asm volatile("" ::: "memory");
+}
+
+#endif
diff --git a/tools/testing/selftests/kvm/include/powerpc/ucall.h b/tools/testing/selftests/kvm/include/powerpc/ucall.h
new file mode 100644
index 000000000000..e0dbe91e8848
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/powerpc/ucall.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTEST_KVM_UCALL_H
+#define SELFTEST_KVM_UCALL_H
+
+#include "hcall.h"
+
+#define UCALL_EXIT_REASON KVM_EXIT_PAPR_HCALL
+
+#define UCALL_R4_UCALL 0x5715 /* regular ucall, r5 contains ucall pointer */
+#define UCALL_R4_SIMPLE 0x0000 /* simple exit usable by asm with no ucall data */
+
+static inline void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
+{
+}
+
+static inline void ucall_arch_do_ucall(gva_t uc)
+{
+ hcall2(H_UCALL, UCALL_R4_UCALL, (uintptr_t)(uc));
+}
+
+#endif
diff --git a/tools/testing/selftests/kvm/lib/guest_modes.c b/tools/testing/selftests/kvm/lib/guest_modes.c
index 7a96c43b5704..439766fad693 100644
--- a/tools/testing/selftests/kvm/lib/guest_modes.c
+++ b/tools/testing/selftests/kvm/lib/guest_modes.c
@@ -4,16 +4,20 @@
*/
#include "guest_modes.h"
-#if defined(__aarch64__) || defined(__riscv)
+#if defined(__aarch64__) || defined(__riscv) || defined(__powerpc64__)
#include "processor.h"
enum vm_guest_mode vm_mode_default;
#endif
+#if defined(__powerpc64__)
+#include <unistd.h>
+#endif
+
struct guest_mode guest_modes[NUM_VM_MODES];
void guest_modes_append_default(void)
{
-#if !defined(__aarch64__) && !defined(__riscv)
+#if !defined(__aarch64__) && !defined(__riscv) && !defined(__powerpc64__)
guest_mode_append(VM_MODE_DEFAULT, true);
#endif
@@ -108,6 +112,18 @@ void guest_modes_append_default(void)
TEST_ASSERT(vm_mode_default != NUM_VM_MODES, "No supported mode!");
}
#endif
+#ifdef __powerpc64__
+ {
+ TEST_REQUIRE(kvm_has_cap(KVM_CAP_PPC_MMU_RADIX));
+ /* Radix guest EA and RA are 52-bit on POWER9 and POWER10 */
+ if (sysconf(_SC_PAGESIZE) == 4096)
+ vm_mode_default = VM_MODE_P52V52_4K;
+ else
+ vm_mode_default = VM_MODE_P52V52_64K;
+ guest_mode_append(VM_MODE_P52V52_4K, true);
+ guest_mode_append(VM_MODE_P52V52_64K, true);
+ }
+#endif
}
void for_each_guest_mode(void (*func)(enum vm_guest_mode, void *), void *arg)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index cdb004c9ba56..0dc67c1502cf 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -218,6 +218,8 @@ const char *vm_guest_mode_string(u32 i)
[VM_MODE_P41V57_4K] = "PA-bits:41, VA-bits:57, 4K pages",
[VM_MODE_P41V48_4K] = "PA-bits:41, VA-bits:48, 4K pages",
[VM_MODE_P41V39_4K] = "PA-bits:41, VA-bits:39, 4K pages",
+ [VM_MODE_P52V52_4K] = "PA-bits:52, VA-bits:52, 4K pages",
+ [VM_MODE_P52V52_64K] = "PA-bits:52, VA-bits:52, 64K pages",
};
_Static_assert(sizeof(strings)/sizeof(char *) == NUM_VM_MODES,
"Missing new mode strings?");
@@ -254,6 +256,8 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
[VM_MODE_P41V57_4K] = { 41, 57, 0x1000, 12 },
[VM_MODE_P41V48_4K] = { 41, 48, 0x1000, 12 },
[VM_MODE_P41V39_4K] = { 41, 39, 0x1000, 12 },
+ [VM_MODE_P52V52_4K] = { 52, 52, 0x1000, 12 },
+ [VM_MODE_P52V52_64K] = { 52, 52, 0x10000, 16 },
};
_Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
"Missing new mode params?");
@@ -371,6 +375,10 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
case VM_MODE_P41V39_4K:
vm->mmu.pgtable_levels = 3;
break;
+ case VM_MODE_P52V52_4K:
+ case VM_MODE_P52V52_64K:
+ vm->mmu.pgtable_levels = 4;
+ break;
default:
TEST_FAIL("Unknown guest mode: 0x%x", vm->mode);
}
diff --git a/tools/testing/selftests/kvm/lib/powerpc/handlers.S b/tools/testing/selftests/kvm/lib/powerpc/handlers.S
new file mode 100644
index 000000000000..b860f6a520a1
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/powerpc/handlers.S
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <ppc_asm.h>
+
+.macro INTERRUPT vec
+. = __interrupts_start + \vec
+ std %r0,(0*8)(%r13)
+ std %r3,(3*8)(%r13)
+ mfspr %r0,SPR_CFAR
+ li %r3,\vec
+ b handle_interrupt
+.endm
+
+.balign 0x1000
+.global __interrupts_start
+__interrupts_start:
+INTERRUPT 0x100
+INTERRUPT 0x200
+INTERRUPT 0x300
+INTERRUPT 0x380
+INTERRUPT 0x400
+INTERRUPT 0x480
+INTERRUPT 0x500
+INTERRUPT 0x600
+INTERRUPT 0x700
+INTERRUPT 0x800
+INTERRUPT 0x900
+INTERRUPT 0xa00
+INTERRUPT 0xc00
+INTERRUPT 0xd00
+INTERRUPT 0xf00
+INTERRUPT 0xf20
+INTERRUPT 0xf40
+INTERRUPT 0xf60
+
+virt_handle_interrupt:
+ stdu %r1,-INT_FRAME_SIZE(%r1)
+ mr %r3,%r31
+ bl route_interrupt
+ ld %r4,(32*8)(%r31) /* NIA */
+ ld %r5,(33*8)(%r31) /* MSR */
+ ld %r6,(35*8)(%r31) /* LR */
+ ld %r7,(36*8)(%r31) /* CTR */
+ ld %r8,(37*8)(%r31) /* XER */
+ lwz %r9,(38*8)(%r31) /* CR */
+ mtspr SPR_SRR0,%r4
+ mtspr SPR_SRR1,%r5
+ mtlr %r6
+ mtctr %r7
+ mtxer %r8
+ mtcr %r9
+reg=4
+ ld %r0,(0*8)(%r31)
+ ld %r3,(3*8)(%r31)
+.rept 28
+ ld reg,(reg*8)(%r31)
+ reg=reg+1
+.endr
+ addi %r1,%r1,INT_FRAME_SIZE
+ rfid
+
+virt_handle_interrupt_p:
+ .llong virt_handle_interrupt
+
+handle_interrupt:
+reg=4
+.rept 28
+ std reg,(reg*8)(%r13)
+ reg=reg+1
+.endr
+ mfspr %r4,SPR_SRR0
+ mfspr %r5,SPR_SRR1
+ mflr %r6
+ mfctr %r7
+ mfxer %r8
+ mfcr %r9
+ std %r4,(32*8)(%r13) /* NIA */
+ std %r5,(33*8)(%r13) /* MSR */
+ std %r0,(34*8)(%r13) /* CFAR */
+ std %r6,(35*8)(%r13) /* LR */
+ std %r7,(36*8)(%r13) /* CTR */
+ std %r8,(37*8)(%r13) /* XER */
+ stw %r9,(38*8 + 0)(%r13) /* CR */
+ stw %r3,(38*8 + 4)(%r13) /* TRAP */
+
+ ld %r31,(39*8)(%r13) /* vaddr */
+ ld %r4,virt_handle_interrupt_p - __interrupts_start(0)
+ mtspr SPR_SRR0,%r4
+ /* Reuse SRR1 */
+
+ rfid
+.global __interrupts_end
+__interrupts_end:
diff --git a/tools/testing/selftests/kvm/lib/powerpc/hcall.c b/tools/testing/selftests/kvm/lib/powerpc/hcall.c
new file mode 100644
index 000000000000..23a56aabad42
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/powerpc/hcall.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PAPR (pseries) hcall support.
+ */
+#include "kvm_util.h"
+#include "hcall.h"
+
+int64_t hcall0(uint64_t token)
+{
+ register uintptr_t r3 asm ("r3") = token;
+
+ asm volatile("sc 1" : "+r"(r3) :
+ : "r0", "r4", "r5", "r6", "r7", "r8", "r9",
+ "r10","r11", "r12", "ctr", "xer",
+ "memory");
+
+ return r3;
+}
+
+int64_t hcall1(uint64_t token, uint64_t arg1)
+{
+ register uintptr_t r3 asm ("r3") = token;
+ register uintptr_t r4 asm ("r4") = arg1;
+
+ asm volatile("sc 1" : "+r"(r3), "+r"(r4) :
+ : "r0", "r5", "r6", "r7", "r8", "r9",
+ "r10","r11", "r12", "ctr", "xer",
+ "memory");
+
+ return r3;
+}
+
+int64_t hcall2(uint64_t token, uint64_t arg1, uint64_t arg2)
+{
+ register uintptr_t r3 asm ("r3") = token;
+ register uintptr_t r4 asm ("r4") = arg1;
+ register uintptr_t r5 asm ("r5") = arg2;
+
+ asm volatile("sc 1" : "+r"(r3), "+r"(r4), "+r"(r5) :
+ : "r0", "r6", "r7", "r8", "r9",
+ "r10","r11", "r12", "ctr", "xer",
+ "memory");
+
+ return r3;
+}
diff --git a/tools/testing/selftests/kvm/lib/powerpc/processor.c b/tools/testing/selftests/kvm/lib/powerpc/processor.c
new file mode 100644
index 000000000000..a345844cf941
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/powerpc/processor.c
@@ -0,0 +1,481 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM selftest powerpc library code - CPU-related functions (page tables...)
+ */
+
+#include <linux/sizes.h>
+
+#include "processor.h"
+#include "kvm_util.h"
+#include "ucall_common.h"
+#include "guest_modes.h"
+#include "hcall.h"
+
+#define RADIX_TREE_SIZE ((0x2UL << 61) | (0x5UL << 5)) /* 52-bits */
+#define RADIX_PGD_INDEX_SIZE 13
+
+static void set_proc_table(struct kvm_vm *vm, int pid, uint64_t dw0, uint64_t dw1)
+{
+ uint64_t *proc_table;
+
+ proc_table = addr_gpa2hva(vm, vm->arch.prtb);
+ proc_table[pid * 2 + 0] = cpu_to_be64(dw0);
+ proc_table[pid * 2 + 1] = cpu_to_be64(dw1);
+}
+
+static void set_radix_proc_table(struct kvm_vm *vm, int pid, gpa_t pgd)
+{
+ set_proc_table(vm, pid, pgd | RADIX_TREE_SIZE | RADIX_PGD_INDEX_SIZE, 0);
+}
+
+void virt_arch_pgd_alloc(struct kvm_vm *vm)
+{
+ struct kvm_ppc_mmuv3_cfg mmu_cfg;
+ gpa_t prtb, pgtb;
+ size_t pgd_pages;
+
+ TEST_ASSERT((vm->mode == VM_MODE_P52V52_4K) ||
+ (vm->mode == VM_MODE_P52V52_64K),
+ "Unsupported guest mode, mode: 0x%x", vm->mode);
+
+ prtb = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ vm->memslots[MEM_REGION_PT]);
+ vm->arch.prtb = prtb;
+
+ pgd_pages = (1UL << (RADIX_PGD_INDEX_SIZE + 3)) >> vm->page_shift;
+ if (!pgd_pages)
+ pgd_pages = 1;
+ pgtb = vm_phy_pages_alloc_align(vm, pgd_pages, pgd_pages,
+ KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ vm->memslots[MEM_REGION_PT]);
+ vm->mmu.pgd = pgtb;
+
+ /* Set the base page directory in the proc table */
+ set_radix_proc_table(vm, 0, pgtb);
+
+ if (vm->mode == VM_MODE_P52V52_4K)
+ mmu_cfg.process_table = prtb | 0x8000000000000000UL | 0x0; /* 4K size */
+ else /* vm->mode == VM_MODE_P52V52_64K */
+ mmu_cfg.process_table = prtb | 0x8000000000000000UL | 0x4; /* 64K size */
+ mmu_cfg.flags = KVM_PPC_MMUV3_RADIX | KVM_PPC_MMUV3_GTSE;
+
+ vm_ioctl(vm, KVM_PPC_CONFIGURE_V3_MMU, &mmu_cfg);
+}
+
+static int pt_shift(struct kvm_vm *vm, int level)
+{
+ switch (level) {
+ case 1:
+ return 13;
+ case 2:
+ case 3:
+ return 9;
+ case 4:
+ if (vm->mode == VM_MODE_P52V52_4K)
+ return 9;
+ else /* vm->mode == VM_MODE_P52V52_64K */
+ return 5;
+ default:
+ TEST_ASSERT(false, "Invalid page table level %d\n", level);
+ return 0;
+ }
+}
+
+static uint64_t pt_entry_coverage(struct kvm_vm *vm, int level)
+{
+ uint64_t size = vm->page_size;
+
+ if (level == 4)
+ return size;
+ size <<= pt_shift(vm, 4);
+ if (level == 3)
+ return size;
+ size <<= pt_shift(vm, 3);
+ if (level == 2)
+ return size;
+ size <<= pt_shift(vm, 2);
+ return size;
+}
+
+static int pt_idx(struct kvm_vm *vm, uint64_t vaddr, int level, uint64_t *nls)
+{
+ switch (level) {
+ case 1:
+ if (nls)
+ *nls = 0x9;
+ return (vaddr >> 39) & 0x1fff;
+ case 2:
+ if (nls)
+ *nls = 0x9;
+ return (vaddr >> 30) & 0x1ff;
+ case 3:
+ if (vm->mode == VM_MODE_P52V52_4K) {
+ if (nls)
+ *nls = 0x9;
+ } else { /* vm->mode == VM_MODE_P52V52_64K */
+ if (nls)
+ *nls = 0x5;
+ }
+ return (vaddr >> 21) & 0x1ff;
+ case 4:
+ if (vm->mode == VM_MODE_P52V52_4K)
+ return (vaddr >> 12) & 0x1ff;
+ else /* vm->mode == VM_MODE_P52V52_64K */
+ return (vaddr >> 16) & 0x1f;
+ default:
+ TEST_ASSERT(false, "Invalid page table level %d\n", level);
+ return 0;
+ }
+}
+
+static uint64_t *virt_get_pte(struct kvm_vm *vm, gpa_t pt,
+ uint64_t vaddr, int level, uint64_t *nls)
+{
+ int idx = pt_idx(vm, vaddr, level, nls);
+ uint64_t *ptep = addr_gpa2hva(vm, pt + idx * 8);
+
+ return ptep;
+}
+
+#define PTE_VALID 0x8000000000000000ull
+#define PTE_LEAF 0x4000000000000000ull
+#define PTE_REFERENCED 0x0000000000000100ull
+#define PTE_CHANGED 0x0000000000000080ull
+#define PTE_PRIV 0x0000000000000008ull
+#define PTE_READ 0x0000000000000004ull
+#define PTE_RW 0x0000000000000002ull
+#define PTE_EXEC 0x0000000000000001ull
+#define PTE_PAGE_MASK 0x01fffffffffff000ull
+
+#define PDE_VALID PTE_VALID
+#define PDE_NLS 0x0000000000000011ull
+#define PDE_PT_MASK 0x0fffffffffffff00ull
+
+static gpa_t __vm_alloc_pt(struct kvm_vm *vm, uint64_t pt_shift)
+{
+ gpa_t pt;
+
+ if (pt_shift >= vm->page_shift) {
+ size_t pt_pages = 1ULL << (pt_shift - vm->page_shift);
+
+ pt = vm_phy_pages_alloc_align(vm, pt_pages, pt_pages,
+ KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ vm->memslots[MEM_REGION_PT]);
+ } else {
+ struct vm_pt_frag_cache *pt_frag_cache;
+
+ if (pt_shift == 8) {
+ pt_frag_cache = &vm->arch.pt_frag_cache[0];
+ } else if (pt_shift == 12) {
+ pt_frag_cache = &vm->arch.pt_frag_cache[1];
+ } else {
+ TEST_ASSERT(0, "Invalid pt_shift:%lu\n", pt_shift);
+ return 0;
+ }
+
+ if (!pt_frag_cache->page) {
+ pt_frag_cache->page = vm_phy_pages_alloc_align(vm, 1, 1,
+ KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ vm->memslots[MEM_REGION_PT]);
+ }
+ pt = pt_frag_cache->page + pt_frag_cache->page_nr_used;
+ pt_frag_cache->page_nr_used += (1 << pt_shift);
+ if (pt_frag_cache->page_nr_used == vm->page_size) {
+ pt_frag_cache->page = 0;
+ pt_frag_cache->page_nr_used = 0;
+ }
+ }
+
+ return pt;
+}
+
+void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
+{
+ gpa_t pt = vm->mmu.pgd;
+ uint64_t *ptep, pte;
+ int level;
+
+ for (level = 1; level <= 3; level++) {
+ uint64_t nls;
+ uint64_t *pdep = virt_get_pte(vm, pt, gva, level, &nls);
+ uint64_t pde = be64_to_cpu(*pdep);
+
+ if (pde) {
+ TEST_ASSERT((pde & PDE_VALID) && !(pde & PTE_LEAF),
+ "Invalid PDE at level: %u gva: 0x%lx pde:0x%lx\n",
+ level, gva, pde);
+ pt = pde & PDE_PT_MASK;
+ continue;
+ }
+
+ pt = __vm_alloc_pt(vm, nls + 3);
+ pde = PDE_VALID | nls | pt;
+ *pdep = cpu_to_be64(pde);
+ }
+
+ ptep = virt_get_pte(vm, pt, gva, level, NULL);
+ pte = be64_to_cpu(*ptep);
+
+ TEST_ASSERT(!pte, "PTE already present at level: %u gva: 0x%lx pte:0x%lx\n",
+ level, gva, pte);
+
+ pte = PTE_VALID | PTE_LEAF | PTE_REFERENCED | PTE_CHANGED | PTE_PRIV |
+ PTE_READ | PTE_RW | PTE_EXEC | (gpa & PTE_PAGE_MASK);
+ *ptep = cpu_to_be64(pte);
+}
+
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
+{
+ gpa_t pt = vm->mmu.pgd;
+ uint64_t *ptep, pte;
+ int level;
+
+ for (level = 1; level <= 3; level++) {
+ uint64_t nls;
+ uint64_t *pdep = virt_get_pte(vm, pt, gva, level, &nls);
+ uint64_t pde = be64_to_cpu(*pdep);
+
+ TEST_ASSERT((pde & PDE_VALID) && !(pde & PTE_LEAF),
+ "PDE not present at level: %u gva: 0x%lx pde:0x%lx\n",
+ level, gva, pde);
+ pt = pde & PDE_PT_MASK;
+ }
+
+ ptep = virt_get_pte(vm, pt, gva, level, NULL);
+ pte = be64_to_cpu(*ptep);
+
+ TEST_ASSERT(pte,
+ "PTE not present at level: %u gva: 0x%lx pte:0x%lx\n",
+ level, gva, pte);
+
+ TEST_ASSERT((pte & PTE_VALID) && (pte & PTE_LEAF) &&
+ (pte & PTE_READ) && (pte & PTE_RW) && (pte & PTE_EXEC),
+ "PTE not valid at level: %u gva: 0x%lx pte:0x%lx\n",
+ level, gva, pte);
+
+ return (pte & PTE_PAGE_MASK) + (gva & (vm->page_size - 1));
+}
+
+static void virt_dump_pt(FILE *stream, struct kvm_vm *vm, gpa_t pt,
+ gva_t va, int level, uint8_t indent)
+{
+ int size, idx;
+
+ size = 1U << (pt_shift(vm, level) + 3);
+
+ for (idx = 0; idx < size; idx += 8, va += pt_entry_coverage(vm, level)) {
+ uint64_t *page_table = addr_gpa2hva(vm, pt + idx);
+ uint64_t pte = be64_to_cpu(*page_table);
+
+ if (!(pte & PTE_VALID))
+ continue;
+
+ if (pte & PTE_LEAF) {
+ fprintf(stream,
+ "%*s PTE[%d] gVA:0x%016lx -> gRA:0x%016llx\n",
+ indent, "", idx / 8, va, pte & PTE_PAGE_MASK);
+ } else {
+ fprintf(stream, "%*sPDE%d[%d] gVA:0x%016lx\n",
+ indent, "", level, idx / 8, va);
+ virt_dump_pt(stream, vm, pte & PDE_PT_MASK, va,
+ level + 1, indent + 2);
+ }
+ }
+
+}
+
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+{
+ gpa_t pt = vm->mmu.pgd;
+
+ if (!vm->mmu.pgd_created)
+ return;
+
+ virt_dump_pt(stream, vm, pt, 0, 1, indent);
+}
+
+static unsigned long get_r2(void)
+{
+ unsigned long r2;
+
+ asm("mr %0,%%r2" : "=r"(r2));
+
+ return r2;
+}
+
+void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
+{
+ struct kvm_regs regs;
+
+ vcpu_regs_get(vcpu, &regs);
+ regs.pc = (uintptr_t)guest_code;
+ regs.gpr[12] = (uintptr_t)guest_code;
+ vcpu_regs_set(vcpu, &regs);
+}
+
+struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
+{
+ const size_t stack_size = SZ_64K;
+ gva_t stack_vaddr, ex_regs_vaddr;
+ gpa_t ex_regs_paddr;
+ struct ex_regs *ex_regs;
+ struct kvm_regs regs;
+ struct kvm_vcpu *vcpu;
+ uint64_t lpcr;
+
+ stack_vaddr = __vm_alloc(vm, stack_size,
+ DEFAULT_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
+
+ ex_regs_vaddr = __vm_alloc(vm, stack_size,
+ DEFAULT_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
+ ex_regs_paddr = addr_gva2gpa(vm, ex_regs_vaddr);
+ ex_regs = addr_gpa2hva(vm, ex_regs_paddr);
+ ex_regs->vaddr = ex_regs_vaddr;
+
+ vcpu = __vm_vcpu_add(vm, vcpu_id);
+
+ vcpu_enable_cap(vcpu, KVM_CAP_PPC_PAPR, 1);
+
+ /* Setup guest registers */
+ vcpu_regs_get(vcpu, &regs);
+ lpcr = vcpu_get_reg(vcpu, KVM_REG_PPC_LPCR_64);
+
+ regs.gpr[1] = stack_vaddr + stack_size - 256;
+ regs.gpr[2] = (uintptr_t)get_r2();
+ regs.gpr[13] = (uintptr_t)ex_regs_paddr;
+
+ regs.msr = MSR_SF | MSR_VEC | MSR_VSX | MSR_FP |
+ MSR_ME | MSR_IR | MSR_DR | MSR_RI;
+
+ if (BYTE_ORDER == LITTLE_ENDIAN) {
+ regs.msr |= MSR_LE;
+ lpcr |= LPCR_ILE;
+ } else {
+ lpcr &= ~LPCR_ILE;
+ }
+
+ vcpu_regs_set(vcpu, &regs);
+ vcpu_set_reg(vcpu, KVM_REG_PPC_LPCR_64, lpcr);
+
+ return vcpu;
+}
+
+void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
+{
+ va_list ap;
+ struct kvm_regs regs;
+ int i;
+
+ TEST_ASSERT(num >= 1 && num <= 5, "Unsupported number of args: %u\n",
+ num);
+
+ va_start(ap, num);
+ vcpu_regs_get(vcpu, &regs);
+
+ for (i = 0; i < num; i++)
+ regs.gpr[i + 3] = va_arg(ap, uint64_t);
+
+ vcpu_regs_set(vcpu, &regs);
+ va_end(ap);
+}
+
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+{
+ struct kvm_regs regs;
+
+ vcpu_regs_get(vcpu, &regs);
+
+ fprintf(stream, "%*sNIA: 0x%016llx MSR: 0x%016llx\n",
+ indent, "", regs.pc, regs.msr);
+ fprintf(stream, "%*sLR: 0x%016llx CTR :0x%016llx\n",
+ indent, "", regs.lr, regs.ctr);
+ fprintf(stream, "%*sCR: 0x%08llx XER :0x%016llx\n",
+ indent, "", regs.cr, regs.xer);
+}
+
+void kvm_arch_vm_post_create(struct kvm_vm *vm, unsigned int nr_vcpus)
+{
+ gpa_t excp_paddr;
+ void *mem;
+
+ excp_paddr = vm_phy_page_alloc(vm, 0, vm->memslots[MEM_REGION_DATA]);
+
+ TEST_ASSERT(excp_paddr == 0,
+ "Interrupt vectors not allocated at gPA address 0: (0x%lx)",
+ excp_paddr);
+
+ mem = addr_gpa2hva(vm, excp_paddr);
+ memcpy(mem, __interrupts_start, __interrupts_end - __interrupts_start);
+}
+
+void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
+{
+ struct ucall uc;
+
+ if (get_ucall(vcpu, &uc) == UCALL_UNHANDLED) {
+ gpa_t ex_regs_paddr;
+ struct ex_regs *ex_regs;
+ struct kvm_regs regs;
+
+ vcpu_regs_get(vcpu, &regs);
+ ex_regs_paddr = (gpa_t)regs.gpr[13];
+ ex_regs = addr_gpa2hva(vcpu->vm, ex_regs_paddr);
+
+ TEST_FAIL("Unexpected interrupt in guest NIA:0x%016lx MSR:0x%016lx TRAP:0x%04x",
+ ex_regs->nia, ex_regs->msr, ex_regs->trap);
+ }
+}
+
+struct handler {
+ void (*fn)(struct ex_regs *regs);
+ int trap;
+};
+
+#define NR_HANDLERS 10
+static struct handler handlers[NR_HANDLERS];
+
+void route_interrupt(struct ex_regs *regs)
+{
+ int i;
+
+ for (i = 0; i < NR_HANDLERS; i++) {
+ if (handlers[i].trap == regs->trap) {
+ handlers[i].fn(regs);
+ return;
+ }
+ }
+
+ ucall(UCALL_UNHANDLED, 0);
+}
+
+void vm_install_exception_handler(struct kvm_vm *vm, int trap,
+ void (*fn)(struct ex_regs *))
+{
+ int i;
+
+ for (i = 0; i < NR_HANDLERS; i++) {
+ if (!handlers[i].trap || handlers[i].trap == trap) {
+ if (fn == NULL)
+ trap = 0; /* Clear handler */
+ handlers[i].trap = trap;
+ handlers[i].fn = fn;
+ sync_global_to_guest(vm, handlers[i]);
+ return;
+ }
+ }
+
+ TEST_FAIL("Out of exception handlers");
+}
+
+void kvm_selftest_arch_init(void)
+{
+ TEST_REQUIRE(kvm_has_cap(KVM_CAP_PPC_MMU_RADIX));
+
+ /*
+ * powerpc default mode is set by host page size and not static,
+ * so start by computing that early.
+ */
+ guest_modes_append_default();
+}
diff --git a/tools/testing/selftests/kvm/lib/powerpc/ucall.c b/tools/testing/selftests/kvm/lib/powerpc/ucall.c
new file mode 100644
index 000000000000..3481a7a0b850
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/powerpc/ucall.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ucall support. A ucall is a "hypercall to host userspace".
+ */
+#include "kvm_util.h"
+#include "ucall_common.h"
+#include "hcall.h"
+
+void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
+{
+ struct kvm_run *run = vcpu->run;
+
+ if (run->exit_reason == UCALL_EXIT_REASON &&
+ run->papr_hcall.nr == H_UCALL) {
+ struct kvm_regs regs;
+
+ vcpu_regs_get(vcpu, &regs);
+ if (regs.gpr[4] == UCALL_R4_UCALL)
+ return (void *)regs.gpr[5];
+ }
+ return NULL;
+}
--
2.39.5
* [RFC 4/4] KVM: PPC: selftests: powerpc enable kvm_create_max_vcpus test
2026-05-15 16:04 [RFC 0/4] KVM: selftests: add powerpc support Ritesh Harjani (IBM)
` (2 preceding siblings ...)
2026-05-15 16:04 ` [RFC 3/4] KVM: PPC: selftests: add support for powerpc Ritesh Harjani (IBM)
@ 2026-05-15 16:04 ` Ritesh Harjani (IBM)
3 siblings, 0 replies; 5+ messages in thread
From: Ritesh Harjani (IBM) @ 2026-05-15 16:04 UTC (permalink / raw)
To: kvm
Cc: linuxppc-dev, Madhavan Srinivasan, Harsh Prateek Bora,
Christophe Leroy, Venkat Rao Bagalkote, Nicholas Piggin,
linux-kernel, Ritesh Harjani (IBM)
From: Nicholas Piggin <npiggin@gmail.com>
powerpc's maximum permitted vCPU ID depends on the VM's SMT mode, and
the maximum reported by KVM_CAP_MAX_VCPU_ID exceeds a simple non-SMT
VM's limit.
The powerpc KVM selftest port uses non-SMT VMs, so add a workaround
to the kvm_create_max_vcpus test case to limit vCPU IDs to
KVM_CAP_MAX_VCPUS on powerpc.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[Rebased to latest mainline tree]
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
tools/testing/selftests/kvm/kvm_create_max_vcpus.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
index c5310736ed06..a82c13d6cdf5 100644
--- a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
+++ b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
@@ -56,6 +56,15 @@ int main(int argc, char *argv[])
"KVM_MAX_VCPU_IDS (%d) must be at least as large as KVM_MAX_VCPUS (%d).",
kvm_max_vcpu_id, kvm_max_vcpus);
+#ifdef __powerpc64__
+ /*
+ * powerpc has a particular format for the vcpu ID that depends on
+ * the guest SMT mode, and the max ID cap is too large for non-SMT
+ * modes, where the maximum ID is the same as the maximum vCPUs.
+ */
+ kvm_max_vcpu_id = kvm_max_vcpus;
+#endif
+
test_vcpu_creation(0, kvm_max_vcpus);
if (kvm_max_vcpu_id > kvm_max_vcpus)
--
2.39.5