From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
kvm@vger.kernel.org
Cc: Joey Gouly <joey.gouly@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Oliver Upton <oupton@kernel.org>,
Zenghui Yu <yuzenghui@huawei.com>, Fuad Tabba <tabba@google.com>,
Will Deacon <will@kernel.org>,
Quentin Perret <qperret@google.com>
Subject: [PATCH v2 05/30] KVM: arm64: Extract stage-2 permission logic in user_mem_abort()
Date: Fri, 27 Mar 2026 11:35:53 +0000
Message-ID: <20260327113618.4051534-6-maz@kernel.org>
In-Reply-To: <20260327113618.4051534-1-maz@kernel.org>
From: Fuad Tabba <tabba@google.com>

Extract the logic that computes the stage-2 protections and checks for
various permission faults (e.g., execution faults on non-cacheable
memory) into a new helper function, kvm_s2_fault_compute_prot(). This
helper also handles injecting atomic/exclusive faults back into the
guest when necessary.

This refactoring step separates the permission computation from the
mapping logic, making the main fault handler flow clearer.
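
For reference, the helper follows the usual tri-state convention of this
path: a negative errno is propagated out as an error, a return of one
means the fault has already been dealt with (here, by injecting an
exclusive/atomic data abort into the guest), and zero means fault->prot
has been populated and the caller can go on to build the mapping. A
simplified sketch of the caller side (the actual hunk below keeps the
existing out_put_page error path):

	ret = kvm_s2_fault_compute_prot(fault);
	if (ret < 0)		/* e.g. -EFAULT or -ENOEXEC */
		goto out_put_page;
	if (ret == 1)		/* abort already injected, resume the guest */
		goto out_put_page;
	/* ret == 0: fault->prot is set, proceed with the stage-2 mapping */
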
Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/mmu.c | 163 +++++++++++++++++++++++--------------------
1 file changed, 87 insertions(+), 76 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1f2c2200ccd8d..d1ffdce18631a 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1809,6 +1809,89 @@ static int kvm_s2_fault_pin_pfn(struct kvm_s2_fault *fault)
return 1;
}
+static int kvm_s2_fault_compute_prot(struct kvm_s2_fault *fault)
+{
+ struct kvm *kvm = fault->vcpu->kvm;
+
+ /*
+ * Check if this is non-struct page memory PFN, and cannot support
+ * CMOs. It could potentially be unsafe to access as cacheable.
+ */
+ if (fault->vm_flags & (VM_PFNMAP | VM_MIXEDMAP) && !pfn_is_map_memory(fault->pfn)) {
+ if (fault->is_vma_cacheable) {
+ /*
+ * Whilst the VMA owner expects cacheable mapping to this
+ * PFN, hardware also has to support the FWB and CACHE DIC
+ * features.
+ *
+ * ARM64 KVM relies on kernel VA mapping to the PFN to
+ * perform cache maintenance as the CMO instructions work on
+ * virtual addresses. VM_PFNMAP region are not necessarily
+ * mapped to a KVA and hence the presence of hardware features
+ * S2FWB and CACHE DIC are mandatory to avoid the need for
+ * cache maintenance.
+ */
+ if (!kvm_supports_cacheable_pfnmap())
+ return -EFAULT;
+ } else {
+ /*
+ * If the page was identified as device early by looking at
+ * the VMA flags, vma_pagesize is already representing the
+ * largest quantity we can map. If instead it was mapped
+ * via __kvm_faultin_pfn(), vma_pagesize is set to PAGE_SIZE
+ * and must not be upgraded.
+ *
+ * In both cases, we don't let transparent_hugepage_adjust()
+ * change things at the last minute.
+ */
+ fault->s2_force_noncacheable = true;
+ }
+ } else if (fault->logging_active && !fault->write_fault) {
+ /*
+ * Only actually map the page as writable if this was a write
+ * fault.
+ */
+ fault->writable = false;
+ }
+
+ if (fault->exec_fault && fault->s2_force_noncacheable)
+ return -ENOEXEC;
+
+ /*
+ * Guest performs atomic/exclusive operations on memory with unsupported
+ * attributes (e.g. ld64b/st64b on normal memory when no FEAT_LS64WB)
+ * and trigger the exception here. Since the memslot is valid, inject
+ * the fault back to the guest.
+ */
+ if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(fault->vcpu))) {
+ kvm_inject_dabt_excl_atomic(fault->vcpu, kvm_vcpu_get_hfar(fault->vcpu));
+ return 1;
+ }
+
+ if (fault->nested)
+ adjust_nested_fault_perms(fault->nested, &fault->prot, &fault->writable);
+
+ if (fault->writable)
+ fault->prot |= KVM_PGTABLE_PROT_W;
+
+ if (fault->exec_fault)
+ fault->prot |= KVM_PGTABLE_PROT_X;
+
+ if (fault->s2_force_noncacheable) {
+ if (fault->vfio_allow_any_uc)
+ fault->prot |= KVM_PGTABLE_PROT_NORMAL_NC;
+ else
+ fault->prot |= KVM_PGTABLE_PROT_DEVICE;
+ } else if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC)) {
+ fault->prot |= KVM_PGTABLE_PROT_X;
+ }
+
+ if (fault->nested)
+ adjust_nested_exec_perms(kvm, fault->nested, &fault->prot);
+
+ return 0;
+}
+
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_s2_trans *nested,
struct kvm_memory_slot *memslot, unsigned long hva,
@@ -1863,68 +1946,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
ret = 0;
- /*
- * Check if this is non-struct page memory PFN, and cannot support
- * CMOs. It could potentially be unsafe to access as cacheable.
- */
- if (fault->vm_flags & (VM_PFNMAP | VM_MIXEDMAP) && !pfn_is_map_memory(fault->pfn)) {
- if (fault->is_vma_cacheable) {
- /*
- * Whilst the VMA owner expects cacheable mapping to this
- * PFN, hardware also has to support the FWB and CACHE DIC
- * features.
- *
- * ARM64 KVM relies on kernel VA mapping to the PFN to
- * perform cache maintenance as the CMO instructions work on
- * virtual addresses. VM_PFNMAP region are not necessarily
- * mapped to a KVA and hence the presence of hardware features
- * S2FWB and CACHE DIC are mandatory to avoid the need for
- * cache maintenance.
- */
- if (!kvm_supports_cacheable_pfnmap())
- ret = -EFAULT;
- } else {
- /*
- * If the page was identified as device early by looking at
- * the VMA flags, fault->vma_pagesize is already representing the
- * largest quantity we can map. If instead it was mapped
- * via __kvm_faultin_pfn(), fault->vma_pagesize is set to PAGE_SIZE
- * and must not be upgraded.
- *
- * In both cases, we don't let transparent_hugepage_adjust()
- * change things at the last minute.
- */
- fault->s2_force_noncacheable = true;
- }
- } else if (fault->logging_active && !fault->write_fault) {
- /*
- * Only actually map the page as fault->writable if this was a write
- * fault.
- */
- fault->writable = false;
+ ret = kvm_s2_fault_compute_prot(fault);
+ if (ret == 1) {
+ /* fault already injected into the guest */
+ goto out_put_page;
}
-
- if (fault->exec_fault && fault->s2_force_noncacheable)
- ret = -ENOEXEC;
-
if (ret)
goto out_put_page;
- /*
- * Guest performs atomic/exclusive operations on memory with unsupported
- * attributes (e.g. ld64b/st64b on normal memory when no FEAT_LS64WB)
- * and trigger the exception here. Since the fault->memslot is valid, inject
- * the fault back to the guest.
- */
- if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(fault->vcpu))) {
- kvm_inject_dabt_excl_atomic(fault->vcpu, kvm_vcpu_get_hfar(fault->vcpu));
- ret = 1;
- goto out_put_page;
- }
-
- if (fault->nested)
- adjust_nested_fault_perms(fault->nested, &fault->prot, &fault->writable);
-
kvm_fault_lock(kvm);
pgt = fault->vcpu->arch.hw_mmu->pgt;
if (mmu_invalidate_retry(kvm, fault->mmu_seq)) {
@@ -1961,24 +1990,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
}
}
- if (fault->writable)
- fault->prot |= KVM_PGTABLE_PROT_W;
-
- if (fault->exec_fault)
- fault->prot |= KVM_PGTABLE_PROT_X;
-
- if (fault->s2_force_noncacheable) {
- if (fault->vfio_allow_any_uc)
- fault->prot |= KVM_PGTABLE_PROT_NORMAL_NC;
- else
- fault->prot |= KVM_PGTABLE_PROT_DEVICE;
- } else if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC)) {
- fault->prot |= KVM_PGTABLE_PROT_X;
- }
-
- if (fault->nested)
- adjust_nested_exec_perms(kvm, fault->nested, &fault->prot);
-
/*
* Under the premise of getting a FSC_PERM fault, we just need to relax
* permissions only if fault->vma_pagesize equals fault->fault_granule. Otherwise,
--
2.47.3
Thread overview: 32+ messages
2026-03-27 11:35 [PATCH v2 00/30] KVM: arm64: Combined user_mem_abort() rework Marc Zyngier
2026-03-27 11:35 ` [PATCH v2 01/30] KVM: arm64: Extract VMA size resolution in user_mem_abort() Marc Zyngier
2026-03-27 11:35 ` [PATCH v2 02/30] KVM: arm64: Introduce struct kvm_s2_fault to user_mem_abort() Marc Zyngier
2026-03-27 11:35 ` [PATCH v2 03/30] KVM: arm64: Extract PFN resolution in user_mem_abort() Marc Zyngier
2026-03-27 11:35 ` [PATCH v2 04/30] KVM: arm64: Isolate mmap_read_lock inside new kvm_s2_fault_get_vma_info() helper Marc Zyngier
2026-03-27 11:35 ` Marc Zyngier [this message]
2026-03-27 11:35 ` [PATCH v2 06/30] KVM: arm64: Extract page table mapping in user_mem_abort() Marc Zyngier
2026-03-27 11:35 ` [PATCH v2 07/30] KVM: arm64: Simplify nested VMA shift calculation Marc Zyngier
2026-03-27 11:35 ` [PATCH v2 08/30] KVM: arm64: Remove redundant state variables from struct kvm_s2_fault Marc Zyngier
2026-03-27 11:35 ` [PATCH v2 09/30] KVM: arm64: Simplify return logic in user_mem_abort() Marc Zyngier
2026-03-27 11:35 ` [PATCH v2 10/30] KVM: arm64: Initialize struct kvm_s2_fault completely at declaration Marc Zyngier
2026-03-27 11:35 ` [PATCH v2 11/30] KVM: arm64: Optimize early exit checks in kvm_s2_fault_pin_pfn() Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 12/30] KVM: arm64: Hoist MTE validation check out of MMU lock path Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 13/30] KVM: arm64: Clean up control flow in kvm_s2_fault_map() Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 14/30] KVM: arm64: Kill fault->ipa Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 15/30] KVM: arm64: Make fault_ipa immutable Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 16/30] KVM: arm64: Move fault context to const structure Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 17/30] KVM: arm64: Replace fault_is_perm with a helper Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 18/30] KVM: arm64: Constrain fault_granule to kvm_s2_fault_map() Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 19/30] KVM: arm64: Kill write_fault from kvm_s2_fault Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 20/30] KVM: arm64: Kill exec_fault " Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 21/30] KVM: arm64: Kill topup_memcache " Marc Zyngier
2026-03-27 14:49 ` Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 22/30] KVM: arm64: Move VMA-related information to kvm_s2_fault_vma_info Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 23/30] KVM: arm64: Kill logging_active from kvm_s2_fault Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 24/30] KVM: arm64: Restrict the scope of the 'writable' attribute Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 25/30] KVM: arm64: Move kvm_s2_fault.{pfn,page} to kvm_s2_vma_info Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 26/30] KVM: arm64: Replace force_pte with a max_map_size attribute Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 27/30] KVM: arm64: Move device mapping management into kvm_s2_fault_pin_pfn() Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 28/30] KVM: arm64: Directly expose mapping prot and kill kvm_s2_fault Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 29/30] KVM: arm64: Simplify integration of adjust_nested_*_perms() Marc Zyngier
2026-03-27 11:36 ` [PATCH v2 30/30] KVM: arm64: Convert gmem_abort() to struct kvm_s2_fault_desc Marc Zyngier