* [PATCH v4 0/3] KVM ARM64 pre_fault_memory
@ 2026-01-13 15:26 Jack Thomson
2026-01-13 15:26 ` [PATCH v4 1/3] KVM: arm64: Add pre_fault_memory implementation Jack Thomson
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Jack Thomson @ 2026-01-13 15:26 UTC (permalink / raw)
To: maz, oliver.upton, pbonzini
Cc: joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, will,
shuah, linux-arm-kernel, kvmarm, linux-kernel, linux-kselftest,
isaku.yamahata, xmarcalx, kalyazin, jackabt
From: Jack Thomson <jackabt@amazon.com>
This patch series adds ARM64 support for the KVM_PRE_FAULT_MEMORY
feature, which was previously only available on x86 [1]. This allows us
to reduce the number of stage-2 faults during execution. This benefits
post-copy migration scenarios, particularly memory-intensive
applications, where we see high latencies caused by stage-2 faults.
Patch Overview:
- The first patch adds support for the KVM_PRE_FAULT_MEMORY ioctl
on arm64.
- The second patch updates the pre_fault_memory_test to support
arm64.
- The last patch extends the pre_fault_memory_test to cover
different vm memory backings.
Regarding the additional parameter to `user_mem_abort`, noted in the
v3 review: would you like this fixed in this series, or would a
follow-up series be OK? I also found a series from Sean which looks to
address this [2].
=== Changes Since v3 [3] ===
- Return -EOPNOTSUPP for pKVM. Previously this was not checked.
- When running a nested guest, properly resolve the L2 IPA to the L1
IPA before pre-faulting.
- Refactoring: page_size is now unsigned, and definitions are ordered
at the top of the pre-fault function.
Thanks, Marc, for your review
=== Changes Since v2 [4] ===
- Updated the synthesized fault info value. Thanks, Suzuki
- Removed the selftest change for unaligned mmap allocations. Thanks,
Sean
[1]: https://lore.kernel.org/kvm/20240710174031.312055-1-pbonzini@redhat.com
[2]: https://lore.kernel.org/linux-arm-kernel/20250821210042.3451147-1-seanjc@google.com/
[3]: https://lore.kernel.org/linux-arm-kernel/20251119154910.97716-1-jackabt.amazon@gmail.com
[4]: https://lore.kernel.org/linux-arm-kernel/20251013151502.6679-1-jackabt.amazon@gmail.com
Jack Thomson (3):
KVM: arm64: Add pre_fault_memory implementation
KVM: selftests: Enable pre_fault_memory_test for arm64
KVM: selftests: Add option for different backing in pre-fault tests
Documentation/virt/kvm/api.rst | 3 +-
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/arm.c | 1 +
arch/arm64/kvm/mmu.c | 79 +++++++++++-
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../selftests/kvm/pre_fault_memory_test.c | 115 ++++++++++++++----
6 files changed, 169 insertions(+), 31 deletions(-)
base-commit: 3611ca7c12b740e250d83f8bbe3554b740c503b0
--
2.43.0
* [PATCH v4 1/3] KVM: arm64: Add pre_fault_memory implementation
2026-01-13 15:26 [PATCH v4 0/3] KVM ARM64 pre_fault_memory Jack Thomson
@ 2026-01-13 15:26 ` Jack Thomson
2026-01-15 9:51 ` Marc Zyngier
2026-01-13 15:26 ` [PATCH v4 2/3] KVM: selftests: Enable pre_fault_memory_test for arm64 Jack Thomson
2026-01-13 15:26 ` [PATCH v4 3/3] KVM: selftests: Add option for different backing in pre-fault tests Jack Thomson
2 siblings, 1 reply; 8+ messages in thread
From: Jack Thomson @ 2026-01-13 15:26 UTC (permalink / raw)
To: maz, oliver.upton, pbonzini
Cc: joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, will,
shuah, linux-arm-kernel, kvmarm, linux-kernel, linux-kselftest,
isaku.yamahata, xmarcalx, kalyazin, jackabt
From: Jack Thomson <jackabt@amazon.com>
Add kvm_arch_vcpu_pre_fault_memory() for arm64. The implementation hands
off the stage-2 faulting logic to either gmem_abort() or
user_mem_abort().
Add an optional page_size output parameter to user_mem_abort() to
return the VMA page size, which is needed when pre-faulting.
Update the documentation to clarify x86-specific behaviour.
Signed-off-by: Jack Thomson <jackabt@amazon.com>
---
Documentation/virt/kvm/api.rst | 3 +-
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/arm.c | 1 +
arch/arm64/kvm/mmu.c | 79 ++++++++++++++++++++++++++++++++--
4 files changed, 79 insertions(+), 5 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 01a3abef8abb..44cfd9e736bb 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6493,7 +6493,8 @@ Errors:
KVM_PRE_FAULT_MEMORY populates KVM's stage-2 page tables used to map memory
for the current vCPU state. KVM maps memory as if the vCPU generated a
stage-2 read page fault, e.g. faults in memory as needed, but doesn't break
-CoW. However, KVM does not mark any newly created stage-2 PTE as Accessed.
+CoW. However, on x86, KVM does not mark any newly created stage-2 PTE as
+Accessed.
In the case of confidential VM types where there is an initial set up of
private guest memory before the guest is 'finalized'/measured, this ioctl
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 4f803fd1c99a..6872aaabe16c 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -25,6 +25,7 @@ menuconfig KVM
select HAVE_KVM_CPU_RELAX_INTERCEPT
select KVM_MMIO
select KVM_GENERIC_DIRTYLOG_READ_PROTECT
+ select KVM_GENERIC_PRE_FAULT_MEMORY
select VIRT_XFER_TO_GUEST_WORK
select KVM_VFIO
select HAVE_KVM_DIRTY_RING_ACQ_REL
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 4f80da0c0d1d..19bac68f737f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -332,6 +332,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_COUNTER_OFFSET:
case KVM_CAP_ARM_WRITABLE_IMP_ID_REGS:
case KVM_CAP_ARM_SEA_TO_USER:
+ case KVM_CAP_PRE_FAULT_MEMORY:
r = 1;
break;
case KVM_CAP_SET_GUEST_DEBUG2:
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 48d7c372a4cd..499b131f794e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1642,8 +1642,8 @@ static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_s2_trans *nested,
- struct kvm_memory_slot *memslot, unsigned long hva,
- bool fault_is_perm)
+ struct kvm_memory_slot *memslot, unsigned long *page_size,
+ unsigned long hva, bool fault_is_perm)
{
int ret = 0;
bool topup_memcache;
@@ -1923,6 +1923,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
kvm_release_faultin_page(kvm, page, !!ret, writable);
kvm_fault_unlock(kvm);
+ if (page_size)
+ *page_size = vma_pagesize;
+
/* Mark the page dirty only if the fault is handled successfully */
if (writable && !ret)
mark_page_dirty_in_slot(kvm, memslot, gfn);
@@ -2196,8 +2199,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
ret = gmem_abort(vcpu, fault_ipa, nested, memslot,
esr_fsc_is_permission_fault(esr));
else
- ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
- esr_fsc_is_permission_fault(esr));
+ ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, NULL,
+ hva, esr_fsc_is_permission_fault(esr));
if (ret == 0)
ret = 1;
out:
@@ -2573,3 +2576,71 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
trace_kvm_toggle_cache(*vcpu_pc(vcpu), was_enabled, now_enabled);
}
+
+long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
+ struct kvm_pre_fault_memory *range)
+{
+ struct kvm_vcpu_fault_info *fault_info = &vcpu->arch.fault;
+ struct kvm_s2_trans nested_trans, *nested = NULL;
+ unsigned long page_size = PAGE_SIZE;
+ struct kvm_memory_slot *memslot;
+ phys_addr_t ipa = range->gpa;
+ phys_addr_t end;
+ hva_t hva;
+ gfn_t gfn;
+ int ret;
+
+ if (vcpu_is_protected(vcpu))
+ return -EOPNOTSUPP;
+
+ /*
+ * We may prefault on a shadow stage 2 page table if we are
+ * running a nested guest. In this case, we have to resolve the L2
+ * IPA to the L1 IPA first, before knowing what kind of memory should
+ * back the L1 IPA.
+ *
+ * If the shadow stage 2 page table walk faults, then we return
+ * -EFAULT
+ */
+ if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu) &&
+ vcpu->arch.hw_mmu->nested_stage2_enabled) {
+ ret = kvm_walk_nested_s2(vcpu, ipa, &nested_trans);
+ if (ret)
+ return -EFAULT;
+
+ ipa = kvm_s2_trans_output(&nested_trans);
+ nested = &nested_trans;
+ }
+
+ if (ipa >= kvm_phys_size(vcpu->arch.hw_mmu))
+ return -ENOENT;
+
+ /* Generate a synthetic abort for the pre-fault address */
+ fault_info->esr_el2 = (ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT) |
+ ESR_ELx_FSC_FAULT_L(KVM_PGTABLE_LAST_LEVEL);
+ fault_info->hpfar_el2 = HPFAR_EL2_NS |
+ FIELD_PREP(HPFAR_EL2_FIPA, ipa >> 12);
+
+ gfn = gpa_to_gfn(ipa);
+ memslot = gfn_to_memslot(vcpu->kvm, gfn);
+ if (!memslot)
+ return -ENOENT;
+
+ if (kvm_slot_has_gmem(memslot)) {
+ /* gmem currently only supports PAGE_SIZE mappings */
+ ret = gmem_abort(vcpu, ipa, nested, memslot, false);
+ } else {
+ hva = gfn_to_hva_memslot_prot(memslot, gfn, NULL);
+ if (kvm_is_error_hva(hva))
+ return -EFAULT;
+
+ ret = user_mem_abort(vcpu, ipa, nested, memslot, &page_size, hva,
+ false);
+ }
+
+ if (ret < 0)
+ return ret;
+
+ end = ALIGN_DOWN(range->gpa, page_size) + page_size;
+ return min(range->size, end - range->gpa);
+}
--
2.43.0
* [PATCH v4 2/3] KVM: selftests: Enable pre_fault_memory_test for arm64
2026-01-13 15:26 [PATCH v4 0/3] KVM ARM64 pre_fault_memory Jack Thomson
2026-01-13 15:26 ` [PATCH v4 1/3] KVM: arm64: Add pre_fault_memory implementation Jack Thomson
@ 2026-01-13 15:26 ` Jack Thomson
2026-01-13 15:26 ` [PATCH v4 3/3] KVM: selftests: Add option for different backing in pre-fault tests Jack Thomson
2 siblings, 0 replies; 8+ messages in thread
From: Jack Thomson @ 2026-01-13 15:26 UTC (permalink / raw)
To: maz, oliver.upton, pbonzini
Cc: joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, will,
shuah, linux-arm-kernel, kvmarm, linux-kernel, linux-kselftest,
isaku.yamahata, xmarcalx, kalyazin, jackabt
From: Jack Thomson <jackabt@amazon.com>
Enable the pre_fault_memory_test to run on arm64 by making it work with
different guest page sizes and testing multiple guest configurations.
Update the TEST_ASSERT to compare against UCALL_EXIT_REASON for
portability, as ucalls exit with KVM_EXIT_MMIO on arm64 while x86 uses
KVM_EXIT_IO.
Signed-off-by: Jack Thomson <jackabt@amazon.com>
---
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../selftests/kvm/pre_fault_memory_test.c | 85 ++++++++++++++-----
2 files changed, 63 insertions(+), 23 deletions(-)
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index ba5c2b643efa..6d6a74ddad30 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -187,6 +187,7 @@ TEST_GEN_PROGS_arm64 += memslot_perf_test
TEST_GEN_PROGS_arm64 += mmu_stress_test
TEST_GEN_PROGS_arm64 += rseq_test
TEST_GEN_PROGS_arm64 += steal_time
+TEST_GEN_PROGS_arm64 += pre_fault_memory_test
TEST_GEN_PROGS_s390 = $(TEST_GEN_PROGS_COMMON)
TEST_GEN_PROGS_s390 += s390/memop
diff --git a/tools/testing/selftests/kvm/pre_fault_memory_test.c b/tools/testing/selftests/kvm/pre_fault_memory_test.c
index 93e603d91311..be1a84a6c137 100644
--- a/tools/testing/selftests/kvm/pre_fault_memory_test.c
+++ b/tools/testing/selftests/kvm/pre_fault_memory_test.c
@@ -11,19 +11,29 @@
#include <kvm_util.h>
#include <processor.h>
#include <pthread.h>
+#include <guest_modes.h>
/* Arbitrarily chosen values */
-#define TEST_SIZE (SZ_2M + PAGE_SIZE)
-#define TEST_NPAGES (TEST_SIZE / PAGE_SIZE)
+#define TEST_BASE_SIZE SZ_2M
#define TEST_SLOT 10
-static void guest_code(uint64_t base_gva)
+/* Storage of test info to share with guest code */
+struct test_config {
+ uint64_t page_size;
+ uint64_t test_size;
+ uint64_t test_num_pages;
+};
+
+static struct test_config test_config;
+
+static void guest_code(uint64_t base_gpa)
{
volatile uint64_t val __used;
+ struct test_config *config = &test_config;
int i;
- for (i = 0; i < TEST_NPAGES; i++) {
- uint64_t *src = (uint64_t *)(base_gva + i * PAGE_SIZE);
+ for (i = 0; i < config->test_num_pages; i++) {
+ uint64_t *src = (uint64_t *)(base_gpa + i * config->page_size);
val = *src;
}
@@ -56,7 +66,7 @@ static void *delete_slot_worker(void *__data)
cpu_relax();
vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, data->gpa,
- TEST_SLOT, TEST_NPAGES, data->flags);
+ TEST_SLOT, test_config.test_num_pages, data->flags);
return NULL;
}
@@ -159,22 +169,35 @@ static void pre_fault_memory(struct kvm_vcpu *vcpu, u64 base_gpa, u64 offset,
KVM_PRE_FAULT_MEMORY, ret, vcpu->vm);
}
-static void __test_pre_fault_memory(unsigned long vm_type, bool private)
+struct test_params {
+ unsigned long vm_type;
+ bool private;
+};
+
+static void __test_pre_fault_memory(enum vm_guest_mode guest_mode, void *arg)
{
uint64_t gpa, gva, alignment, guest_page_size;
+ struct test_params *p = arg;
const struct vm_shape shape = {
- .mode = VM_MODE_DEFAULT,
- .type = vm_type,
+ .mode = guest_mode,
+ .type = p->vm_type,
};
struct kvm_vcpu *vcpu;
struct kvm_run *run;
struct kvm_vm *vm;
struct ucall uc;
+ pr_info("Testing guest mode: %s\n", vm_guest_mode_string(guest_mode));
+
vm = vm_create_shape_with_one_vcpu(shape, &vcpu, guest_code);
- alignment = guest_page_size = vm_guest_mode_params[VM_MODE_DEFAULT].page_size;
- gpa = (vm->max_gfn - TEST_NPAGES) * guest_page_size;
+ guest_page_size = vm_guest_mode_params[guest_mode].page_size;
+
+ test_config.page_size = guest_page_size;
+ test_config.test_size = TEST_BASE_SIZE + test_config.page_size;
+ test_config.test_num_pages = vm_calc_num_guest_pages(vm->mode, test_config.test_size);
+
+ gpa = (vm->max_gfn - test_config.test_num_pages) * test_config.page_size;
#ifdef __s390x__
alignment = max(0x100000UL, guest_page_size);
#else
@@ -183,23 +206,32 @@ static void __test_pre_fault_memory(unsigned long vm_type, bool private)
gpa = align_down(gpa, alignment);
gva = gpa & ((1ULL << (vm->va_bits - 1)) - 1);
- vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, gpa, TEST_SLOT,
- TEST_NPAGES, private ? KVM_MEM_GUEST_MEMFD : 0);
- virt_map(vm, gva, gpa, TEST_NPAGES);
+ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+ gpa, TEST_SLOT, test_config.test_num_pages,
+ p->private ? KVM_MEM_GUEST_MEMFD : 0);
+ virt_map(vm, gva, gpa, test_config.test_num_pages);
+
+ if (p->private)
+ vm_mem_set_private(vm, gpa, test_config.test_size);
+ pre_fault_memory(vcpu, gpa, 0, TEST_BASE_SIZE, 0, p->private);
+ /* Test pre-faulting over an already faulted range */
+ pre_fault_memory(vcpu, gpa, 0, TEST_BASE_SIZE, 0, p->private);
+ pre_fault_memory(vcpu, gpa, TEST_BASE_SIZE,
+ test_config.page_size * 2, test_config.page_size, p->private);
+ pre_fault_memory(vcpu, gpa, test_config.test_size,
+ test_config.page_size, test_config.page_size, p->private);
- if (private)
- vm_mem_set_private(vm, gpa, TEST_SIZE);
+ vcpu_args_set(vcpu, 1, gva);
- pre_fault_memory(vcpu, gpa, 0, SZ_2M, 0, private);
- pre_fault_memory(vcpu, gpa, SZ_2M, PAGE_SIZE * 2, PAGE_SIZE, private);
- pre_fault_memory(vcpu, gpa, TEST_SIZE, PAGE_SIZE, PAGE_SIZE, private);
+ /* Export the shared variables to the guest. */
+ sync_global_to_guest(vm, test_config);
- vcpu_args_set(vcpu, 1, gva);
vcpu_run(vcpu);
run = vcpu->run;
- TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
- "Wanted KVM_EXIT_IO, got exit reason: %u (%s)",
+ TEST_ASSERT(run->exit_reason == UCALL_EXIT_REASON,
+ "Wanted %s, got exit reason: %u (%s)",
+ exit_reason_str(UCALL_EXIT_REASON),
run->exit_reason, exit_reason_str(run->exit_reason));
switch (get_ucall(vcpu, &uc)) {
@@ -218,18 +250,25 @@ static void __test_pre_fault_memory(unsigned long vm_type, bool private)
static void test_pre_fault_memory(unsigned long vm_type, bool private)
{
+ struct test_params p = {
+ .vm_type = vm_type,
+ .private = private,
+ };
+
if (vm_type && !(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type))) {
pr_info("Skipping tests for vm_type 0x%lx\n", vm_type);
return;
}
- __test_pre_fault_memory(vm_type, private);
+ for_each_guest_mode(__test_pre_fault_memory, &p);
}
int main(int argc, char *argv[])
{
TEST_REQUIRE(kvm_check_cap(KVM_CAP_PRE_FAULT_MEMORY));
+ guest_modes_append_default();
+
test_pre_fault_memory(0, false);
#ifdef __x86_64__
test_pre_fault_memory(KVM_X86_SW_PROTECTED_VM, false);
--
2.43.0
* [PATCH v4 3/3] KVM: selftests: Add option for different backing in pre-fault tests
2026-01-13 15:26 [PATCH v4 0/3] KVM ARM64 pre_fault_memory Jack Thomson
2026-01-13 15:26 ` [PATCH v4 1/3] KVM: arm64: Add pre_fault_memory implementation Jack Thomson
2026-01-13 15:26 ` [PATCH v4 2/3] KVM: selftests: Enable pre_fault_memory_test for arm64 Jack Thomson
@ 2026-01-13 15:26 ` Jack Thomson
2 siblings, 0 replies; 8+ messages in thread
From: Jack Thomson @ 2026-01-13 15:26 UTC (permalink / raw)
To: maz, oliver.upton, pbonzini
Cc: joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, will,
shuah, linux-arm-kernel, kvmarm, linux-kernel, linux-kselftest,
isaku.yamahata, xmarcalx, kalyazin, jackabt
From: Jack Thomson <jackabt@amazon.com>
Add a -m option to specify different memory backing types for the
pre-fault tests (e.g., anonymous, hugetlb), allowing the pre-fault
functionality to be exercised across different memory configurations.
Signed-off-by: Jack Thomson <jackabt@amazon.com>
---
.../selftests/kvm/pre_fault_memory_test.c | 42 +++++++++++++++----
1 file changed, 33 insertions(+), 9 deletions(-)
diff --git a/tools/testing/selftests/kvm/pre_fault_memory_test.c b/tools/testing/selftests/kvm/pre_fault_memory_test.c
index be1a84a6c137..1a177f89bc43 100644
--- a/tools/testing/selftests/kvm/pre_fault_memory_test.c
+++ b/tools/testing/selftests/kvm/pre_fault_memory_test.c
@@ -172,6 +172,7 @@ static void pre_fault_memory(struct kvm_vcpu *vcpu, u64 base_gpa, u64 offset,
struct test_params {
unsigned long vm_type;
bool private;
+ enum vm_mem_backing_src_type mem_backing_src;
};
static void __test_pre_fault_memory(enum vm_guest_mode guest_mode, void *arg)
@@ -187,14 +188,19 @@ static void __test_pre_fault_memory(enum vm_guest_mode guest_mode, void *arg)
struct kvm_vm *vm;
struct ucall uc;
+ size_t backing_src_pagesz = get_backing_src_pagesz(p->mem_backing_src);
+
pr_info("Testing guest mode: %s\n", vm_guest_mode_string(guest_mode));
+ pr_info("Testing memory backing src type: %s\n",
+ vm_mem_backing_src_alias(p->mem_backing_src)->name);
vm = vm_create_shape_with_one_vcpu(shape, &vcpu, guest_code);
guest_page_size = vm_guest_mode_params[guest_mode].page_size;
test_config.page_size = guest_page_size;
- test_config.test_size = TEST_BASE_SIZE + test_config.page_size;
+ test_config.test_size = align_up(TEST_BASE_SIZE + test_config.page_size,
+ backing_src_pagesz);
test_config.test_num_pages = vm_calc_num_guest_pages(vm->mode, test_config.test_size);
gpa = (vm->max_gfn - test_config.test_num_pages) * test_config.page_size;
@@ -203,20 +209,23 @@ static void __test_pre_fault_memory(enum vm_guest_mode guest_mode, void *arg)
#else
alignment = SZ_2M;
#endif
+ alignment = max(alignment, backing_src_pagesz);
gpa = align_down(gpa, alignment);
gva = gpa & ((1ULL << (vm->va_bits - 1)) - 1);
- vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+ vm_userspace_mem_region_add(vm, p->mem_backing_src,
gpa, TEST_SLOT, test_config.test_num_pages,
p->private ? KVM_MEM_GUEST_MEMFD : 0);
virt_map(vm, gva, gpa, test_config.test_num_pages);
if (p->private)
vm_mem_set_private(vm, gpa, test_config.test_size);
- pre_fault_memory(vcpu, gpa, 0, TEST_BASE_SIZE, 0, p->private);
+
+ pre_fault_memory(vcpu, gpa, 0, test_config.test_size, 0, p->private);
/* Test pre-faulting over an already faulted range */
- pre_fault_memory(vcpu, gpa, 0, TEST_BASE_SIZE, 0, p->private);
- pre_fault_memory(vcpu, gpa, TEST_BASE_SIZE,
+ pre_fault_memory(vcpu, gpa, 0, test_config.test_size, 0, p->private);
+ pre_fault_memory(vcpu, gpa,
+ test_config.test_size - test_config.page_size,
test_config.page_size * 2, test_config.page_size, p->private);
pre_fault_memory(vcpu, gpa, test_config.test_size,
test_config.page_size, test_config.page_size, p->private);
@@ -248,11 +257,13 @@ static void __test_pre_fault_memory(enum vm_guest_mode guest_mode, void *arg)
kvm_vm_free(vm);
}
-static void test_pre_fault_memory(unsigned long vm_type, bool private)
+static void test_pre_fault_memory(unsigned long vm_type, enum vm_mem_backing_src_type backing_src,
+ bool private)
{
struct test_params p = {
.vm_type = vm_type,
.private = private,
+ .mem_backing_src = backing_src,
};
if (vm_type && !(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type))) {
@@ -265,14 +276,27 @@ static void test_pre_fault_memory(unsigned long vm_type, bool private)
int main(int argc, char *argv[])
{
+ enum vm_mem_backing_src_type backing = VM_MEM_SRC_ANONYMOUS;
+ int opt;
+
TEST_REQUIRE(kvm_check_cap(KVM_CAP_PRE_FAULT_MEMORY));
guest_modes_append_default();
- test_pre_fault_memory(0, false);
+ while ((opt = getopt(argc, argv, "m:")) != -1) {
+ switch (opt) {
+ case 'm':
+ backing = parse_backing_src_type(optarg);
+ break;
+ default:
+ break;
+ }
+ }
+
+ test_pre_fault_memory(0, backing, false);
#ifdef __x86_64__
- test_pre_fault_memory(KVM_X86_SW_PROTECTED_VM, false);
- test_pre_fault_memory(KVM_X86_SW_PROTECTED_VM, true);
+ test_pre_fault_memory(KVM_X86_SW_PROTECTED_VM, backing, false);
+ test_pre_fault_memory(KVM_X86_SW_PROTECTED_VM, backing, true);
#endif
return 0;
}
--
2.43.0
* Re: [PATCH v4 1/3] KVM: arm64: Add pre_fault_memory implementation
2026-01-13 15:26 ` [PATCH v4 1/3] KVM: arm64: Add pre_fault_memory implementation Jack Thomson
@ 2026-01-15 9:51 ` Marc Zyngier
2026-01-16 14:33 ` Thomson, Jack
0 siblings, 1 reply; 8+ messages in thread
From: Marc Zyngier @ 2026-01-15 9:51 UTC (permalink / raw)
To: Jack Thomson
Cc: oliver.upton, pbonzini, joey.gouly, suzuki.poulose, yuzenghui,
catalin.marinas, will, shuah, linux-arm-kernel, kvmarm,
linux-kernel, linux-kselftest, isaku.yamahata, xmarcalx, kalyazin,
jackabt, Vladimir Murzin
[+ Vladimir, who was also looking at this patch]
On Tue, 13 Jan 2026 15:26:40 +0000,
Jack Thomson <jackabt.amazon@gmail.com> wrote:
>
> From: Jack Thomson <jackabt@amazon.com>
>
> Add kvm_arch_vcpu_pre_fault_memory() for arm64. The implementation hands
> off the stage-2 faulting logic to either gmem_abort() or
> user_mem_abort().
>
> Add an optional page_size output parameter to user_mem_abort() to
> return the VMA page size, which is needed when pre-faulting.
>
> Update the documentation to clarify x86 specific behaviour.
>
> Signed-off-by: Jack Thomson <jackabt@amazon.com>
> ---
> Documentation/virt/kvm/api.rst | 3 +-
> arch/arm64/kvm/Kconfig | 1 +
> arch/arm64/kvm/arm.c | 1 +
> arch/arm64/kvm/mmu.c | 79 ++++++++++++++++++++++++++++++++--
> 4 files changed, 79 insertions(+), 5 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 01a3abef8abb..44cfd9e736bb 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -6493,7 +6493,8 @@ Errors:
> KVM_PRE_FAULT_MEMORY populates KVM's stage-2 page tables used to map memory
> for the current vCPU state. KVM maps memory as if the vCPU generated a
> stage-2 read page fault, e.g. faults in memory as needed, but doesn't break
> -CoW. However, KVM does not mark any newly created stage-2 PTE as Accessed.
> +CoW. However, on x86, KVM does not mark any newly created stage-2 PTE as
> +Accessed.
>
> In the case of confidential VM types where there is an initial set up of
> private guest memory before the guest is 'finalized'/measured, this ioctl
> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> index 4f803fd1c99a..6872aaabe16c 100644
> --- a/arch/arm64/kvm/Kconfig
> +++ b/arch/arm64/kvm/Kconfig
> @@ -25,6 +25,7 @@ menuconfig KVM
> select HAVE_KVM_CPU_RELAX_INTERCEPT
> select KVM_MMIO
> select KVM_GENERIC_DIRTYLOG_READ_PROTECT
> + select KVM_GENERIC_PRE_FAULT_MEMORY
> select VIRT_XFER_TO_GUEST_WORK
> select KVM_VFIO
> select HAVE_KVM_DIRTY_RING_ACQ_REL
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 4f80da0c0d1d..19bac68f737f 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -332,6 +332,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> case KVM_CAP_COUNTER_OFFSET:
> case KVM_CAP_ARM_WRITABLE_IMP_ID_REGS:
> case KVM_CAP_ARM_SEA_TO_USER:
> + case KVM_CAP_PRE_FAULT_MEMORY:
> r = 1;
> break;
> case KVM_CAP_SET_GUEST_DEBUG2:
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 48d7c372a4cd..499b131f794e 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1642,8 +1642,8 @@ static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>
> static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> struct kvm_s2_trans *nested,
> - struct kvm_memory_slot *memslot, unsigned long hva,
> - bool fault_is_perm)
> + struct kvm_memory_slot *memslot, unsigned long *page_size,
> + unsigned long hva, bool fault_is_perm)
> {
> int ret = 0;
> bool topup_memcache;
> @@ -1923,6 +1923,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> kvm_release_faultin_page(kvm, page, !!ret, writable);
> kvm_fault_unlock(kvm);
>
> + if (page_size)
> + *page_size = vma_pagesize;
> +
> /* Mark the page dirty only if the fault is handled successfully */
> if (writable && !ret)
> mark_page_dirty_in_slot(kvm, memslot, gfn);
> @@ -2196,8 +2199,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> ret = gmem_abort(vcpu, fault_ipa, nested, memslot,
> esr_fsc_is_permission_fault(esr));
> else
> - ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
> - esr_fsc_is_permission_fault(esr));
> + ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, NULL,
> + hva, esr_fsc_is_permission_fault(esr));
> if (ret == 0)
> ret = 1;
> out:
> @@ -2573,3 +2576,71 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
>
> trace_kvm_toggle_cache(*vcpu_pc(vcpu), was_enabled, now_enabled);
> }
> +
> +long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
> + struct kvm_pre_fault_memory *range)
> +{
> + struct kvm_vcpu_fault_info *fault_info = &vcpu->arch.fault;
> + struct kvm_s2_trans nested_trans, *nested = NULL;
> + unsigned long page_size = PAGE_SIZE;
> + struct kvm_memory_slot *memslot;
> + phys_addr_t ipa = range->gpa;
> + phys_addr_t end;
> + hva_t hva;
> + gfn_t gfn;
> + int ret;
> +
> + if (vcpu_is_protected(vcpu))
> + return -EOPNOTSUPP;
This feels pretty odd. If you have advertised the capability, then
saying "not supported" at this stage is not on.
> +
> + /*
> + * We may prefault on a shadow stage 2 page table if we are
> + * running a nested guest. In this case, we have to resolve the L2
> + * IPA to the L1 IPA first, before knowing what kind of memory should
> + * back the L1 IPA.
> + *
> + * If the shadow stage 2 page table walk faults, then we return
> + * -EFAULT
> + */
> + if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu) &&
> + vcpu->arch.hw_mmu->nested_stage2_enabled) {
> + ret = kvm_walk_nested_s2(vcpu, ipa, &nested_trans);
> + if (ret)
> + return -EFAULT;
And then what? Userspace is completely screwed here, with no way to
make any forward progress, because the L1 is in charge of that S2, and
L1 is not running. What's the outcome? Light a candle and pray?
Also, the IPA you are passing as a parameter means absolutely nothing
in the context of L2. Userspace doesn't have the faintest clue about
the memory map presented to L2, as that's L1 business. L1 can
absolutely present to L2 a memory map that doesn't have a single
address in common with its own.
So this really doesn't work at all.
> +
> + ipa = kvm_s2_trans_output(&nested_trans);
> + nested = &nested_trans;
> + }
> +
> + if (ipa >= kvm_phys_size(vcpu->arch.hw_mmu))
> + return -ENOENT;
> +
> + /* Generate a synthetic abort for the pre-fault address */
> + fault_info->esr_el2 = (ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT) |
> + ESR_ELx_FSC_FAULT_L(KVM_PGTABLE_LAST_LEVEL);
Why level 3? You must present a fault that matches the level at which
the emulated fault would actually occur, because the rest of the
infrastructure relies on that (at least on the permission path, and
more to come).
Taking a step back on all this, 90% of the problems are there because
you are trying to support prefaulting a guest that is already running.
If you limited this to actually *pre*-faulting the guest, it would be
the easiest thing ever, and wouldn't suffer from any of the above (you
can't be in a nested context if you haven't run).
What prevents you from doing so? I'm perfectly happy to make this a
separate API if this contradicts other implementations. Or are you
relying on other side effects of the "already running" state?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v4 1/3] KVM: arm64: Add pre_fault_memory implementation
2026-01-15 9:51 ` Marc Zyngier
@ 2026-01-16 14:33 ` Thomson, Jack
2026-01-18 10:29 ` Marc Zyngier
0 siblings, 1 reply; 8+ messages in thread
From: Thomson, Jack @ 2026-01-16 14:33 UTC (permalink / raw)
To: Marc Zyngier
Cc: oliver.upton, pbonzini, joey.gouly, suzuki.poulose, yuzenghui,
catalin.marinas, will, shuah, linux-arm-kernel, kvmarm,
linux-kernel, linux-kselftest, isaku.yamahata, xmarcalx, kalyazin,
jackabt, Vladimir Murzin
Hey Marc,
Thanks for the review.
On 15/01/2026 9:51 am, Marc Zyngier wrote:
> [+ Vladimir, who was also looking at this patch]
>
> On Tue, 13 Jan 2026 15:26:40 +0000,
> Jack Thomson <jackabt.amazon@gmail.com> wrote:
>>
>> From: Jack Thomson <jackabt@amazon.com>
>>
>> Add kvm_arch_vcpu_pre_fault_memory() for arm64. The implementation hands
>> off the stage-2 faulting logic to either gmem_abort() or
>> user_mem_abort().
>>
>> Add an optional page_size output parameter to user_mem_abort() to
>> return the VMA page size, which is needed when pre-faulting.
>>
>> Update the documentation to clarify x86 specific behaviour.
>>
>> Signed-off-by: Jack Thomson <jackabt@amazon.com>
>> ---
>> Documentation/virt/kvm/api.rst | 3 +-
>> arch/arm64/kvm/Kconfig | 1 +
>> arch/arm64/kvm/arm.c | 1 +
>> arch/arm64/kvm/mmu.c | 79 ++++++++++++++++++++++++++++++++--
>> 4 files changed, 79 insertions(+), 5 deletions(-)
>>
>> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
>> index 01a3abef8abb..44cfd9e736bb 100644
>> --- a/Documentation/virt/kvm/api.rst
>> +++ b/Documentation/virt/kvm/api.rst
>> @@ -6493,7 +6493,8 @@ Errors:
>> KVM_PRE_FAULT_MEMORY populates KVM's stage-2 page tables used to map memory
>> for the current vCPU state. KVM maps memory as if the vCPU generated a
>> stage-2 read page fault, e.g. faults in memory as needed, but doesn't break
>> -CoW. However, KVM does not mark any newly created stage-2 PTE as Accessed.
>> +CoW. However, on x86, KVM does not mark any newly created stage-2 PTE as
>> +Accessed.
>>
>> In the case of confidential VM types where there is an initial set up of
>> private guest memory before the guest is 'finalized'/measured, this ioctl
>> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
>> index 4f803fd1c99a..6872aaabe16c 100644
>> --- a/arch/arm64/kvm/Kconfig
>> +++ b/arch/arm64/kvm/Kconfig
>> @@ -25,6 +25,7 @@ menuconfig KVM
>> select HAVE_KVM_CPU_RELAX_INTERCEPT
>> select KVM_MMIO
>> select KVM_GENERIC_DIRTYLOG_READ_PROTECT
>> + select KVM_GENERIC_PRE_FAULT_MEMORY
>> select VIRT_XFER_TO_GUEST_WORK
>> select KVM_VFIO
>> select HAVE_KVM_DIRTY_RING_ACQ_REL
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index 4f80da0c0d1d..19bac68f737f 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -332,6 +332,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>> case KVM_CAP_COUNTER_OFFSET:
>> case KVM_CAP_ARM_WRITABLE_IMP_ID_REGS:
>> case KVM_CAP_ARM_SEA_TO_USER:
>> + case KVM_CAP_PRE_FAULT_MEMORY:
>> r = 1;
>> break;
>> case KVM_CAP_SET_GUEST_DEBUG2:
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 48d7c372a4cd..499b131f794e 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1642,8 +1642,8 @@ static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>
>> static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>> struct kvm_s2_trans *nested,
>> - struct kvm_memory_slot *memslot, unsigned long hva,
>> - bool fault_is_perm)
>> + struct kvm_memory_slot *memslot, unsigned long *page_size,
>> + unsigned long hva, bool fault_is_perm)
>> {
>> int ret = 0;
>> bool topup_memcache;
>> @@ -1923,6 +1923,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>> kvm_release_faultin_page(kvm, page, !!ret, writable);
>> kvm_fault_unlock(kvm);
>>
>> + if (page_size)
>> + *page_size = vma_pagesize;
>> +
>> /* Mark the page dirty only if the fault is handled successfully */
>> if (writable && !ret)
>> mark_page_dirty_in_slot(kvm, memslot, gfn);
>> @@ -2196,8 +2199,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>> ret = gmem_abort(vcpu, fault_ipa, nested, memslot,
>> esr_fsc_is_permission_fault(esr));
>> else
>> - ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
>> - esr_fsc_is_permission_fault(esr));
>> + ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, NULL,
>> + hva, esr_fsc_is_permission_fault(esr));
>> if (ret == 0)
>> ret = 1;
>> out:
>> @@ -2573,3 +2576,71 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
>>
>> trace_kvm_toggle_cache(*vcpu_pc(vcpu), was_enabled, now_enabled);
>> }
>> +
>> +long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
>> + struct kvm_pre_fault_memory *range)
>> +{
>> + struct kvm_vcpu_fault_info *fault_info = &vcpu->arch.fault;
>> + struct kvm_s2_trans nested_trans, *nested = NULL;
>> + unsigned long page_size = PAGE_SIZE;
>> + struct kvm_memory_slot *memslot;
>> + phys_addr_t ipa = range->gpa;
>> + phys_addr_t end;
>> + hva_t hva;
>> + gfn_t gfn;
>> + int ret;
>> +
>> + if (vcpu_is_protected(vcpu))
>> + return -EOPNOTSUPP;
>
> This feels pretty odd. If you have advertised the capability, then
> saying "not supported" at this stage is not on.
>
Thanks, good point. I think I can actually just drop this completely,
since kvm_pvm_ext_allowed() would already exclude this capability.
>> +
>> + /*
>> + * We may prefault on a shadow stage 2 page table if we are
>> + * running a nested guest. In this case, we have to resolve the L2
>> + * IPA to the L1 IPA first, before knowing what kind of memory should
>> + * back the L1 IPA.
>> + *
>> + * If the shadow stage 2 page table walk faults, then we return
>> + * -EFAULT
>> + */
>> + if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu) &&
>> + vcpu->arch.hw_mmu->nested_stage2_enabled) {
>> + ret = kvm_walk_nested_s2(vcpu, ipa, &nested_trans);
>> + if (ret)
>> + return -EFAULT;
>
> And then what? Userspace is completely screwed here, with no way to
> make any forward progress, because the L1 is in charge of that S2, and
> L1 is not running. What's the outcome? Light a candle and pray?
>
> Also, the IPA you are passing as a parameter means absolutely nothing
> in the context of L2. Userspace doesn't have the faintest clue about
> the memory map presented to L2, as that's L1 business. L1 can
> absolutely present to L2 a memory map that doesn't have a single
> address in common with its own.
>
> So this really doesn't work at all.
>
Would just returning -EOPNOTSUPP in this case like:
if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu) &&
vcpu->arch.hw_mmu->nested_stage2_enabled)
return -EOPNOTSUPP;
be the best way to continue for now?
>> +
>> + ipa = kvm_s2_trans_output(&nested_trans);
>> + nested = &nested_trans;
>> + }
>> +
>> + if (ipa >= kvm_phys_size(vcpu->arch.hw_mmu))
>> + return -ENOENT;
>> +
>> + /* Generate a synthetic abort for the pre-fault address */
>> + fault_info->esr_el2 = (ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT) |
>> + ESR_ELx_FSC_FAULT_L(KVM_PGTABLE_LAST_LEVEL);
>
> Why level 3? You must present a fault that matches the level at which
> the emulated fault would actually occur, because the rest of the
> infrastructure relies on that (at least on the permission path, and
> more to come).
>
Ack, thanks. I was relying on the fact that `fault_is_perm` was hardcoded
to false. I'll replace it with something like:
pgt = vcpu->arch.hw_mmu->pgt;
ret = kvm_pgtable_get_leaf(pgt, gpa, &pte, &level);
if (ret)
return ret;
fault_info->esr_el2 = (ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT) |
ESR_ELx_FSC_FAULT_L(level);
fault_info->hpfar_el2 = HPFAR_EL2_NS |
FIELD_PREP(HPFAR_EL2_FIPA, gpa >> 12);
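For context on why user_mem_abort() gains the page_size out-parameter earlier in this patch: the pre-fault path replays one synthetic abort per granule and then advances by however much was actually mapped, so a block mapping covers many iterations at once. A stand-alone sketch of that advance-by-mapped-granule loop, with a hypothetical `fake_mem_abort()` standing in for the real abort handler:

```c
#include <assert.h>
#include <stdint.h>

#define MIB (1024ull * 1024)

/* Hypothetical stand-in for user_mem_abort() and its page_size out
 * parameter: pretend IPAs below 4 MiB are backed by 2 MiB block
 * mappings and everything above by 4 KiB pages. */
static int fake_mem_abort(uint64_t ipa, uint64_t *page_size)
{
	*page_size = ipa < 4 * MIB ? 2 * MIB : 4096;
	return 0;
}

/* Sketch of the pre-fault loop: fault in one granule at a time and
 * step to the end of whatever mapping was installed. Returns the
 * number of synthetic faults taken, or a negative error. */
static long prefault_loop(uint64_t ipa, uint64_t size)
{
	uint64_t end = ipa + size;
	long faults = 0;

	while (ipa < end) {
		uint64_t page_size;
		int ret = fake_mem_abort(ipa, &page_size);

		if (ret)
			return ret;
		/* Round down first so a block mapping that covers ipa
		 * moves us to the block's end, not past it. */
		ipa = (ipa & ~(page_size - 1)) + page_size;
		faults++;
	}
	return faults;
}
```

With the fake backing above, a 4 MiB range starting at 0 needs only two faults, while the same amount of 4 KiB-backed memory needs one per page; that difference is exactly why the handler has to report the mapped size back to its caller.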
> Taking a step back on all this, 90% of the problems are there because
> you are trying to support prefaulting a guest that is already running.
> If you limited this to actually *pre*-faulting the guest, it would be
> the easiest thing ever, and wouldn't suffer from any of the above (you
> can't be in a nested context if you haven't run).
>
> What prevents you from doing so? I'm perfectly happy to make this a
> separate API if this contradicts other implementations. Or are you
> relying on other side effects of the "already running" state?
We would need this to work on an already running guest.
>
> Thanks,
>
> M.
Thanks again for taking a look!
--
Thanks,
Jack
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v4 1/3] KVM: arm64: Add pre_fault_memory implementation
2026-01-16 14:33 ` Thomson, Jack
@ 2026-01-18 10:29 ` Marc Zyngier
2026-01-19 11:10 ` Thomson, Jack
0 siblings, 1 reply; 8+ messages in thread
From: Marc Zyngier @ 2026-01-18 10:29 UTC (permalink / raw)
To: Thomson, Jack
Cc: oliver.upton, pbonzini, joey.gouly, suzuki.poulose, yuzenghui,
catalin.marinas, will, shuah, linux-arm-kernel, kvmarm,
linux-kernel, linux-kselftest, isaku.yamahata, xmarcalx, kalyazin,
jackabt, Vladimir Murzin
On Fri, 16 Jan 2026 14:33:42 +0000,
"Thomson, Jack" <jackabt.amazon@gmail.com> wrote:
>
>
> Hey Marc,
>
> Thanks for the review.
>
> On 15/01/2026 9:51 am, Marc Zyngier wrote:
> > [+ Vladimir, who was also looking at this patch]
> >
> > On Tue, 13 Jan 2026 15:26:40 +0000,
> > Jack Thomson <jackabt.amazon@gmail.com> wrote:
> >>
> >> From: Jack Thomson <jackabt@amazon.com>
> >>
> >> Add kvm_arch_vcpu_pre_fault_memory() for arm64. The implementation hands
> >> off the stage-2 faulting logic to either gmem_abort() or
> >> user_mem_abort().
> >>
> >> Add an optional page_size output parameter to user_mem_abort() to
> >> return the VMA page size, which is needed when pre-faulting.
> >>
> >> Update the documentation to clarify x86 specific behaviour.
> >>
> >> Signed-off-by: Jack Thomson <jackabt@amazon.com>
> >> ---
> >> Documentation/virt/kvm/api.rst | 3 +-
> >> arch/arm64/kvm/Kconfig | 1 +
> >> arch/arm64/kvm/arm.c | 1 +
> >> arch/arm64/kvm/mmu.c | 79 ++++++++++++++++++++++++++++++++--
> >> 4 files changed, 79 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> >> index 01a3abef8abb..44cfd9e736bb 100644
> >> --- a/Documentation/virt/kvm/api.rst
> >> +++ b/Documentation/virt/kvm/api.rst
> >> @@ -6493,7 +6493,8 @@ Errors:
> >> KVM_PRE_FAULT_MEMORY populates KVM's stage-2 page tables used to map memory
> >> for the current vCPU state. KVM maps memory as if the vCPU generated a
> >> stage-2 read page fault, e.g. faults in memory as needed, but doesn't break
> >> -CoW. However, KVM does not mark any newly created stage-2 PTE as Accessed.
> >> +CoW. However, on x86, KVM does not mark any newly created stage-2 PTE as
> >> +Accessed.
> >> In the case of confidential VM types where there is an initial set up of
> >> private guest memory before the guest is 'finalized'/measured, this ioctl
> >> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> >> index 4f803fd1c99a..6872aaabe16c 100644
> >> --- a/arch/arm64/kvm/Kconfig
> >> +++ b/arch/arm64/kvm/Kconfig
> >> @@ -25,6 +25,7 @@ menuconfig KVM
> >> select HAVE_KVM_CPU_RELAX_INTERCEPT
> >> select KVM_MMIO
> >> select KVM_GENERIC_DIRTYLOG_READ_PROTECT
> >> + select KVM_GENERIC_PRE_FAULT_MEMORY
> >> select VIRT_XFER_TO_GUEST_WORK
> >> select KVM_VFIO
> >> select HAVE_KVM_DIRTY_RING_ACQ_REL
> >> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> >> index 4f80da0c0d1d..19bac68f737f 100644
> >> --- a/arch/arm64/kvm/arm.c
> >> +++ b/arch/arm64/kvm/arm.c
> >> @@ -332,6 +332,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> >> case KVM_CAP_COUNTER_OFFSET:
> >> case KVM_CAP_ARM_WRITABLE_IMP_ID_REGS:
> >> case KVM_CAP_ARM_SEA_TO_USER:
> >> + case KVM_CAP_PRE_FAULT_MEMORY:
> >> r = 1;
> >> break;
> >> case KVM_CAP_SET_GUEST_DEBUG2:
> >> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> >> index 48d7c372a4cd..499b131f794e 100644
> >> --- a/arch/arm64/kvm/mmu.c
> >> +++ b/arch/arm64/kvm/mmu.c
> >> @@ -1642,8 +1642,8 @@ static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >> static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >> struct kvm_s2_trans *nested,
> >> - struct kvm_memory_slot *memslot, unsigned long hva,
> >> - bool fault_is_perm)
> >> + struct kvm_memory_slot *memslot, unsigned long *page_size,
> >> + unsigned long hva, bool fault_is_perm)
> >> {
> >> int ret = 0;
> >> bool topup_memcache;
> >> @@ -1923,6 +1923,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >> kvm_release_faultin_page(kvm, page, !!ret, writable);
> >> kvm_fault_unlock(kvm);
> >> + if (page_size)
> >> + *page_size = vma_pagesize;
> >> +
> >> /* Mark the page dirty only if the fault is handled successfully */
> >> if (writable && !ret)
> >> mark_page_dirty_in_slot(kvm, memslot, gfn);
> >> @@ -2196,8 +2199,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> >> ret = gmem_abort(vcpu, fault_ipa, nested, memslot,
> >> esr_fsc_is_permission_fault(esr));
> >> else
> >> - ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
> >> - esr_fsc_is_permission_fault(esr));
> >> + ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, NULL,
> >> + hva, esr_fsc_is_permission_fault(esr));
> >> if (ret == 0)
> >> ret = 1;
> >> out:
> >> @@ -2573,3 +2576,71 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
> >> trace_kvm_toggle_cache(*vcpu_pc(vcpu), was_enabled, now_enabled);
> >> }
> >> +
> >> +long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
> >> + struct kvm_pre_fault_memory *range)
> >> +{
> >> + struct kvm_vcpu_fault_info *fault_info = &vcpu->arch.fault;
> >> + struct kvm_s2_trans nested_trans, *nested = NULL;
> >> + unsigned long page_size = PAGE_SIZE;
> >> + struct kvm_memory_slot *memslot;
> >> + phys_addr_t ipa = range->gpa;
> >> + phys_addr_t end;
> >> + hva_t hva;
> >> + gfn_t gfn;
> >> + int ret;
> >> +
> >> + if (vcpu_is_protected(vcpu))
> >> + return -EOPNOTSUPP;
> >
> > This feels pretty odd. If you have advertised the capability, then
> > saying "not supported" at this stage is not on.
> >
>
> Thanks, good point. I think I can actually just drop this completely,
> since kvm_pvm_ext_allowed() would already exclude this capability.
>
I think you still need some runtime handling, just in case userspace
is acting silly.
> >> +
> >> + /*
> >> + * We may prefault on a shadow stage 2 page table if we are
> >> + * running a nested guest. In this case, we have to resolve the L2
> >> + * IPA to the L1 IPA first, before knowing what kind of memory should
> >> + * back the L1 IPA.
> >> + *
> >> + * If the shadow stage 2 page table walk faults, then we return
> >> + * -EFAULT
> >> + */
> >> + if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu) &&
> >> + vcpu->arch.hw_mmu->nested_stage2_enabled) {
> >> + ret = kvm_walk_nested_s2(vcpu, ipa, &nested_trans);
> >> + if (ret)
> >> + return -EFAULT;
> >
> > And then what? Userspace is completely screwed here, with no way to
> > make any forward progress, because the L1 is in charge of that S2, and
> > L1 is not running. What's the outcome? Light a candle and pray?
> >
> > Also, the IPA you are passing as a parameter means absolutely nothing
> > in the context of L2. Userspace doesn't have the faintest clue about
> > the memory map presented to L2, as that's L1 business. L1 can
> > absolutely present to L2 a memory map that doesn't have a single
> > address in common with its own.
> >
> > So this really doesn't work at all.
> >
> Would just returning -EOPNOTSUPP in this case like:
Absolutely *not*. Userspace has no idea what the guest is doing, and
cannot influence it (other than disabling nesting altogether). This is
just as bad as -EFAULT.
>
> if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu) &&
> vcpu->arch.hw_mmu->nested_stage2_enabled)
> return -EOPNOTSUPP;
>
> be the best way to continue for now?
We both know that what you actually mean is "this doesn't match my use
case, let someone else deal with it". To which my answer is that you
either fully support pre-faulting, or you don't at all. There is no
middle ground.
> >> +
> >> + ipa = kvm_s2_trans_output(&nested_trans);
> >> + nested = &nested_trans;
> >> + }
> >> +
> >> + if (ipa >= kvm_phys_size(vcpu->arch.hw_mmu))
> >> + return -ENOENT;
> >> +
> >> + /* Generate a synthetic abort for the pre-fault address */
> >> + fault_info->esr_el2 = (ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT) |
> >> + ESR_ELx_FSC_FAULT_L(KVM_PGTABLE_LAST_LEVEL);
> >
> > Why level 3? You must present a fault that matches the level at which
> > the emulated fault would actually occur, because the rest of the
> > infrastructure relies on that (at least on the permission path, and
> > more to come).
> >
>
> Ack, thanks. I was relying on the fact that `fault_is_perm` was hardcoded
> to false. I'll replace it with something like:
>
> pgt = vcpu->arch.hw_mmu->pgt;
> ret = kvm_pgtable_get_leaf(pgt, gpa, &pte, &level);
> if (ret)
> return ret;
>
> fault_info->esr_el2 = (ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT) |
> ESR_ELx_FSC_FAULT_L(level);
> fault_info->hpfar_el2 = HPFAR_EL2_NS |
> FIELD_PREP(HPFAR_EL2_FIPA, gpa >> 12);
If a mapping exists, you probably don't want to replay the fault. And
this needs to occur while the mmu_lock is held.
>
> > Taking a step back on all this, 90% of the problems are there because
> > you are trying to support prefaulting a guest that is already running.
> > If you limited this to actually *pre*-faulting the guest, it would be
> > the easiest thing ever, and wouldn't suffer from any of the above (you
> > can't be in a nested context if you haven't run).
> >
> > What prevents you from doing so? I'm perfectly happy to make this a
> > separate API if this contradicts other implementations. Or are you
> > relying on other side effects of the "already running" state?
>
> We would need this to work on an already running guest.
Then you need to fully support pre-faulting the guest even when it is
in a nested context, without resorting to copping out in situations
that do not match your narrow use-case. Which means populating the
canonical s2_mmu even when you're not in that particular context.
Thanks,
M.
--
Jazz isn't dead. It just smells funny.
* Re: [PATCH v4 1/3] KVM: arm64: Add pre_fault_memory implementation
2026-01-18 10:29 ` Marc Zyngier
@ 2026-01-19 11:10 ` Thomson, Jack
0 siblings, 0 replies; 8+ messages in thread
From: Thomson, Jack @ 2026-01-19 11:10 UTC (permalink / raw)
To: Marc Zyngier
Cc: oliver.upton, pbonzini, joey.gouly, suzuki.poulose, yuzenghui,
catalin.marinas, will, shuah, linux-arm-kernel, kvmarm,
linux-kernel, linux-kselftest, isaku.yamahata, xmarcalx, kalyazin,
jackabt, Vladimir Murzin
Hi Marc,
On 18/01/2026 10:29 am, Marc Zyngier wrote:
> On Fri, 16 Jan 2026 14:33:42 +0000,
> "Thomson, Jack" <jackabt.amazon@gmail.com> wrote:
>>
>>
>> Hey Marc,
>>
>> Thanks for the review.
>>
>> On 15/01/2026 9:51 am, Marc Zyngier wrote:
>>> [+ Vladimir, who was also looking at this patch]
>>>
>>> On Tue, 13 Jan 2026 15:26:40 +0000,
>>> Jack Thomson <jackabt.amazon@gmail.com> wrote:
>>>>
>>>> From: Jack Thomson <jackabt@amazon.com>
>>>>
>>>> Add kvm_arch_vcpu_pre_fault_memory() for arm64. The implementation hands
>>>> off the stage-2 faulting logic to either gmem_abort() or
>>>> user_mem_abort().
>>>>
>>>> Add an optional page_size output parameter to user_mem_abort() to
>>>> return the VMA page size, which is needed when pre-faulting.
>>>>
>>>> Update the documentation to clarify x86 specific behaviour.
>>>>
>>>> Signed-off-by: Jack Thomson <jackabt@amazon.com>
>>>> ---
>>>> Documentation/virt/kvm/api.rst | 3 +-
>>>> arch/arm64/kvm/Kconfig | 1 +
>>>> arch/arm64/kvm/arm.c | 1 +
>>>> arch/arm64/kvm/mmu.c | 79 ++++++++++++++++++++++++++++++++--
>>>> 4 files changed, 79 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
>>>> index 01a3abef8abb..44cfd9e736bb 100644
>>>> --- a/Documentation/virt/kvm/api.rst
>>>> +++ b/Documentation/virt/kvm/api.rst
>>>> @@ -6493,7 +6493,8 @@ Errors:
>>>> KVM_PRE_FAULT_MEMORY populates KVM's stage-2 page tables used to map memory
>>>> for the current vCPU state. KVM maps memory as if the vCPU generated a
>>>> stage-2 read page fault, e.g. faults in memory as needed, but doesn't break
>>>> -CoW. However, KVM does not mark any newly created stage-2 PTE as Accessed.
>>>> +CoW. However, on x86, KVM does not mark any newly created stage-2 PTE as
>>>> +Accessed.
>>>> In the case of confidential VM types where there is an initial set up of
>>>> private guest memory before the guest is 'finalized'/measured, this ioctl
>>>> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
>>>> index 4f803fd1c99a..6872aaabe16c 100644
>>>> --- a/arch/arm64/kvm/Kconfig
>>>> +++ b/arch/arm64/kvm/Kconfig
>>>> @@ -25,6 +25,7 @@ menuconfig KVM
>>>> select HAVE_KVM_CPU_RELAX_INTERCEPT
>>>> select KVM_MMIO
>>>> select KVM_GENERIC_DIRTYLOG_READ_PROTECT
>>>> + select KVM_GENERIC_PRE_FAULT_MEMORY
>>>> select VIRT_XFER_TO_GUEST_WORK
>>>> select KVM_VFIO
>>>> select HAVE_KVM_DIRTY_RING_ACQ_REL
>>>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>>>> index 4f80da0c0d1d..19bac68f737f 100644
>>>> --- a/arch/arm64/kvm/arm.c
>>>> +++ b/arch/arm64/kvm/arm.c
>>>> @@ -332,6 +332,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>>> case KVM_CAP_COUNTER_OFFSET:
>>>> case KVM_CAP_ARM_WRITABLE_IMP_ID_REGS:
>>>> case KVM_CAP_ARM_SEA_TO_USER:
>>>> + case KVM_CAP_PRE_FAULT_MEMORY:
>>>> r = 1;
>>>> break;
>>>> case KVM_CAP_SET_GUEST_DEBUG2:
>>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>>> index 48d7c372a4cd..499b131f794e 100644
>>>> --- a/arch/arm64/kvm/mmu.c
>>>> +++ b/arch/arm64/kvm/mmu.c
>>>> @@ -1642,8 +1642,8 @@ static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>> static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>> struct kvm_s2_trans *nested,
>>>> - struct kvm_memory_slot *memslot, unsigned long hva,
>>>> - bool fault_is_perm)
>>>> + struct kvm_memory_slot *memslot, unsigned long *page_size,
>>>> + unsigned long hva, bool fault_is_perm)
>>>> {
>>>> int ret = 0;
>>>> bool topup_memcache;
>>>> @@ -1923,6 +1923,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>> kvm_release_faultin_page(kvm, page, !!ret, writable);
>>>> kvm_fault_unlock(kvm);
>>>> + if (page_size)
>>>> + *page_size = vma_pagesize;
>>>> +
>>>> /* Mark the page dirty only if the fault is handled successfully */
>>>> if (writable && !ret)
>>>> mark_page_dirty_in_slot(kvm, memslot, gfn);
>>>> @@ -2196,8 +2199,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>>>> ret = gmem_abort(vcpu, fault_ipa, nested, memslot,
>>>> esr_fsc_is_permission_fault(esr));
>>>> else
>>>> - ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
>>>> - esr_fsc_is_permission_fault(esr));
>>>> + ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, NULL,
>>>> + hva, esr_fsc_is_permission_fault(esr));
>>>> if (ret == 0)
>>>> ret = 1;
>>>> out:
>>>> @@ -2573,3 +2576,71 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
>>>> trace_kvm_toggle_cache(*vcpu_pc(vcpu), was_enabled, now_enabled);
>>>> }
>>>> +
>>>> +long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
>>>> + struct kvm_pre_fault_memory *range)
>>>> +{
>>>> + struct kvm_vcpu_fault_info *fault_info = &vcpu->arch.fault;
>>>> + struct kvm_s2_trans nested_trans, *nested = NULL;
>>>> + unsigned long page_size = PAGE_SIZE;
>>>> + struct kvm_memory_slot *memslot;
>>>> + phys_addr_t ipa = range->gpa;
>>>> + phys_addr_t end;
>>>> + hva_t hva;
>>>> + gfn_t gfn;
>>>> + int ret;
>>>> +
>>>> + if (vcpu_is_protected(vcpu))
>>>> + return -EOPNOTSUPP;
>>>
>>> This feels pretty odd. If you have advertised the capability, then
>>> saying "not supported" at this stage is not on.
>>>
>>
>> Thanks, good point. I think I can actually just drop this completely,
>> since kvm_pvm_ext_allowed() would already exclude this capability.
>>
>
> I think you still need some runtime handling, just in case userspace
> is acting silly.
>
Yeah, makes sense. I'll put something in.
>>>> +
>>>> + /*
>>>> + * We may prefault on a shadow stage 2 page table if we are
>>>> + * running a nested guest. In this case, we have to resolve the L2
>>>> + * IPA to the L1 IPA first, before knowing what kind of memory should
>>>> + * back the L1 IPA.
>>>> + *
>>>> + * If the shadow stage 2 page table walk faults, then we return
>>>> + * -EFAULT
>>>> + */
>>>> + if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu) &&
>>>> + vcpu->arch.hw_mmu->nested_stage2_enabled) {
>>>> + ret = kvm_walk_nested_s2(vcpu, ipa, &nested_trans);
>>>> + if (ret)
>>>> + return -EFAULT;
>>>
>>> And then what? Userspace is completely screwed here, with no way to
>>> make any forward progress, because the L1 is in charge of that S2, and
>>> L1 is not running. What's the outcome? Light a candle and pray?
>>>
>>> Also, the IPA you are passing as a parameter means absolutely nothing
>>> in the context of L2. Userspace doesn't have the faintest clue about
>>> the memory map presented to L2, as that's L1 business. L1 can
>>> absolutely present to L2 a memory map that doesn't have a single
>>> address in common with its own.
>>>
>>> So this really doesn't work at all.
>>>
>> Would just returning -EOPNOTSUPP in this case like:
>
> Absolutely *not*. Userspace has no idea what the guest is doing, and
> cannot influence it (other than disabling nesting altogether). This is
> just as bad as -EFAULT.
>
>>
>> if (kvm_is_nested_s2_mmu(vcpu->kvm, vcpu->arch.hw_mmu) &&
>> vcpu->arch.hw_mmu->nested_stage2_enabled)
>> return -EOPNOTSUPP;
>>
>> be the best way to continue for now?
>
> We both know that what you actually mean is "this doesn't match my use
> case, let someone else deal with it". To which my answer is that you
> either fully support pre-faulting, or you don't at all. There is no
> middle ground.
>
Sorry if it came across as a cop-out; I think I just misunderstood your
earlier comment about taking a step back and looking at an easier
approach. If this is required, I'll definitely look at full support for
pre-faulting.
>>>> +
>>>> + ipa = kvm_s2_trans_output(&nested_trans);
>>>> + nested = &nested_trans;
>>>> + }
>>>> +
>>>> + if (ipa >= kvm_phys_size(vcpu->arch.hw_mmu))
>>>> + return -ENOENT;
>>>> +
>>>> + /* Generate a synthetic abort for the pre-fault address */
>>>> + fault_info->esr_el2 = (ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT) |
>>>> + ESR_ELx_FSC_FAULT_L(KVM_PGTABLE_LAST_LEVEL);
>>>
>>> Why level 3? You must present a fault that matches the level at which
>>> the emulated fault would actually occur, because the rest of the
>>> infrastructure relies on that (at least on the permission path, and
>>> more to come).
>>>
>>
>> Ack, thanks. I was relying on the fact that `fault_is_perm` was hardcoded
>> to false. I'll replace it with something like:
>>
>> pgt = vcpu->arch.hw_mmu->pgt;
>> ret = kvm_pgtable_get_leaf(pgt, gpa, &pte, &level);
>> if (ret)
>> return ret;
>>
>> fault_info->esr_el2 = (ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT) |
>> ESR_ELx_FSC_FAULT_L(level);
>> fault_info->hpfar_el2 = HPFAR_EL2_NS |
>> FIELD_PREP(HPFAR_EL2_FIPA, gpa >> 12);
>
> If a mapping exists, you probably don't want to replay the fault. And
> this needs to occur while the mmu_lock is held.
>
Got it, thanks.
>>
>>> Taking a step back on all this, 90% of the problems are there because
>>> you are trying to support prefaulting a guest that is already running.
>>> If you limited this to actually *pre*-faulting the guest, it would be
>>> the easiest thing ever, and wouldn't suffer from any of the above (you
>>> can't be in a nested context if you haven't run).
>>>
>>> What prevents you from doing so? I'm perfectly happy to make this a
>>> separate API if this contradicts other implementations. Or are you
>>> relying on other side effects of the "already running" state?
>>
>> We would need this to work on an already running guest.
>
> Then you need to fully support pre-faulting the guest even when it is
> in a nested context, without resorting to copping out in situations
> that do not match your narrow use-case. Which means populating the
> canonical s2_mmu even when you're not in that particular context.
Will do, I'll handle this case in the next revision.
Thanks again for looking.
Jack
end of thread
Thread overview: 8+ messages
2026-01-13 15:26 [PATCH v4 0/3] KVM ARM64 pre_fault_memory Jack Thomson
2026-01-13 15:26 ` [PATCH v4 1/3] KVM: arm64: Add pre_fault_memory implementation Jack Thomson
2026-01-15 9:51 ` Marc Zyngier
2026-01-16 14:33 ` Thomson, Jack
2026-01-18 10:29 ` Marc Zyngier
2026-01-19 11:10 ` Thomson, Jack
2026-01-13 15:26 ` [PATCH v4 2/3] KVM: selftests: Enable pre_fault_memory_test for arm64 Jack Thomson
2026-01-13 15:26 ` [PATCH v4 3/3] KVM: selftests: Add option for different backing in pre-fault tests Jack Thomson