From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev,
linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
suzuki.poulose@arm.com, yuzenghui@huawei.com,
catalin.marinas@arm.com, will@kernel.org, qperret@google.com,
vdonnefort@google.com, tabba@google.com
Subject: [PATCH v1 03/13] KVM: arm64: Extract PFN resolution in user_mem_abort()
Date: Fri, 6 Mar 2026 14:02:22 +0000 [thread overview]
Message-ID: <20260306140232.2193802-4-tabba@google.com> (raw)
In-Reply-To: <20260306140232.2193802-1-tabba@google.com>

Extract the section of code responsible for pinning the physical page
frame number (PFN) backing the faulting IPA into a new helper,
kvm_s2_fault_pin_pfn().

This helper encapsulates the critical section during which
mmap_read_lock is held: the VMA is looked up, the MMU invalidate
sequence is sampled, and the PFN is ultimately resolved via
__kvm_faultin_pfn(). It also handles the early exits for
hardware-poisoned pages and noslot PFNs.

By isolating this region, the state variables required for PFN
resolution can begin moving into struct kvm_s2_fault, clearing a
significant amount of local variable clutter out of user_mem_abort().

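For illustration only (this is a user-space sketch with hypothetical
stand-in types, not the kernel code), the tri-state return convention
the new helper adopts could be modelled as: a negative errno aborts the
fault, 0 propagates a benign early exit back to the guest, and 1 means
the PFN was pinned and fault handling continues.

```c
#include <errno.h>

/* Hypothetical stand-in for the kernel struct; names mirror the patch. */
struct kvm_s2_fault {
	unsigned long hva;
	unsigned long pfn;
};

/*
 * Sketch of the helper's contract:
 *   < 0  -> hard error (e.g. -EFAULT for a noslot PFN)
 *     0  -> benign early exit, caller returns 0 so the guest retries
 *     1  -> PFN pinned, caller continues towards the stage-2 mapping
 */
static int kvm_s2_fault_pin_pfn(struct kvm_s2_fault *fault)
{
	if (fault->hva == 0)
		return -EFAULT;		/* stands in for is_error_noslot_pfn() */
	if (fault->hva == 1)
		return 0;		/* stands in for a retryable early exit */
	fault->pfn = fault->hva >> 12;	/* pretend PFN resolution */
	return 1;
}

/* Caller pattern from the patch: only ret == 1 falls through. */
static int user_mem_abort_sketch(unsigned long hva, unsigned long *pfn)
{
	struct kvm_s2_fault fault = { .hva = hva };
	int ret = kvm_s2_fault_pin_pfn(&fault);

	if (ret != 1)
		return ret;
	ret = 0;			/* reset before the mapping stage */
	*pfn = fault.pfn;
	return ret;
}
```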
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/mmu.c | 105 ++++++++++++++++++++++++-------------------
1 file changed, 59 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 419b793c5891..d56c6422ca5f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1740,55 +1740,11 @@ struct kvm_s2_fault {
vm_flags_t vm_flags;
};
-static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
- struct kvm_s2_trans *nested,
- struct kvm_memory_slot *memslot, unsigned long hva,
- bool fault_is_perm)
+static int kvm_s2_fault_pin_pfn(struct kvm_s2_fault *fault)
{
- int ret = 0;
- struct kvm_s2_fault fault_data = {
- .vcpu = vcpu,
- .fault_ipa = fault_ipa,
- .nested = nested,
- .memslot = memslot,
- .hva = hva,
- .fault_is_perm = fault_is_perm,
- .ipa = fault_ipa,
- .logging_active = memslot_is_logging(memslot),
- .force_pte = memslot_is_logging(memslot),
- .s2_force_noncacheable = false,
- .vfio_allow_any_uc = false,
- .prot = KVM_PGTABLE_PROT_R,
- };
- struct kvm_s2_fault *fault = &fault_data;
- struct kvm *kvm = vcpu->kvm;
struct vm_area_struct *vma;
- void *memcache;
- struct kvm_pgtable *pgt;
- enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
+ struct kvm *kvm = fault->vcpu->kvm;
- if (fault->fault_is_perm)
- fault->fault_granule = kvm_vcpu_trap_get_perm_fault_granule(fault->vcpu);
- fault->write_fault = kvm_is_write_fault(fault->vcpu);
- fault->exec_fault = kvm_vcpu_trap_is_exec_fault(fault->vcpu);
- VM_WARN_ON_ONCE(fault->write_fault && fault->exec_fault);
-
- /*
- * Permission faults just need to update the existing leaf entry,
- * and so normally don't require allocations from the memcache. The
- * only exception to this is when dirty logging is enabled at runtime
- * and a write fault needs to collapse a block entry into a table.
- */
- fault->topup_memcache = !fault->fault_is_perm ||
- (fault->logging_active && fault->write_fault);
- ret = prepare_mmu_memcache(fault->vcpu, fault->topup_memcache, &memcache);
- if (ret)
- return ret;
-
- /*
- * Let's check if we will get back a huge fault->page backed by hugetlbfs, or
- * get block mapping for device MMIO region.
- */
mmap_read_lock(current->mm);
vma = vma_lookup(current->mm, fault->hva);
if (unlikely(!vma)) {
@@ -1842,6 +1798,63 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (is_error_noslot_pfn(fault->pfn))
return -EFAULT;
+ return 1;
+}
+
+static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+ struct kvm_s2_trans *nested,
+ struct kvm_memory_slot *memslot, unsigned long hva,
+ bool fault_is_perm)
+{
+ int ret = 0;
+ struct kvm_s2_fault fault_data = {
+ .vcpu = vcpu,
+ .fault_ipa = fault_ipa,
+ .nested = nested,
+ .memslot = memslot,
+ .hva = hva,
+ .fault_is_perm = fault_is_perm,
+ .ipa = fault_ipa,
+ .logging_active = memslot_is_logging(memslot),
+ .force_pte = memslot_is_logging(memslot),
+ .s2_force_noncacheable = false,
+ .vfio_allow_any_uc = false,
+ .prot = KVM_PGTABLE_PROT_R,
+ };
+ struct kvm_s2_fault *fault = &fault_data;
+ struct kvm *kvm = vcpu->kvm;
+ void *memcache;
+ struct kvm_pgtable *pgt;
+ enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
+
+ if (fault->fault_is_perm)
+ fault->fault_granule = kvm_vcpu_trap_get_perm_fault_granule(fault->vcpu);
+ fault->write_fault = kvm_is_write_fault(fault->vcpu);
+ fault->exec_fault = kvm_vcpu_trap_is_exec_fault(fault->vcpu);
+ VM_WARN_ON_ONCE(fault->write_fault && fault->exec_fault);
+
+ /*
+ * Permission faults just need to update the existing leaf entry,
+ * and so normally don't require allocations from the memcache. The
+ * only exception to this is when dirty logging is enabled at runtime
+ * and a write fault needs to collapse a block entry into a table.
+ */
+ fault->topup_memcache = !fault->fault_is_perm ||
+ (fault->logging_active && fault->write_fault);
+ ret = prepare_mmu_memcache(fault->vcpu, fault->topup_memcache, &memcache);
+ if (ret)
+ return ret;
+
+ /*
+ * Let's check if we will get back a huge fault->page backed by hugetlbfs, or
+ * get block mapping for device MMIO region.
+ */
+ ret = kvm_s2_fault_pin_pfn(fault);
+ if (ret != 1)
+ return ret;
+
+ ret = 0;
+
/*
* Check if this is non-struct fault->page memory PFN, and cannot support
* CMOs. It could potentially be unsafe to access as cacheable.
--
2.53.0.473.g4a7958ca14-goog