From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	kvm@vger.kernel.org
Cc: Joey Gouly <joey.gouly@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Oliver Upton <oupton@kernel.org>,
	Zenghui Yu <yuzenghui@huawei.com>,
	Fuad Tabba <tabba@google.com>,
	Will Deacon <will@kernel.org>,
	Quentin Perret <qperret@google.com>
Subject: [PATCH v2 03/30] KVM: arm64: Extract PFN resolution in user_mem_abort()
Date: Fri, 27 Mar 2026 11:35:51 +0000
Message-ID: <20260327113618.4051534-4-maz@kernel.org>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260327113618.4051534-1-maz@kernel.org>
References: <20260327113618.4051534-1-maz@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Fuad Tabba <tabba@google.com>

Extract the section of code responsible for pinning the physical page
frame number (PFN) backing the faulting IPA into a new helper,
kvm_s2_fault_pin_pfn(). This helper encapsulates the critical section
where the mmap_read_lock is held, the VMA is looked up, the mmu
invalidate sequence is sampled, and the PFN is ultimately resolved via
__kvm_faultin_pfn(). It also handles the early exits for hardware
poisoned pages and noslot PFNs.

By isolating this region, we can begin to organize the state variables
required for PFN resolution into the kvm_s2_fault struct, clearing out
a significant amount of local variable clutter from user_mem_abort().

Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/mmu.c | 105 ++++++++++++++++++++++++-------------------
 1 file changed, 59 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b366bde15a429..5079a58b65b14 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1740,55 +1740,11 @@ struct kvm_s2_fault {
 	vm_flags_t vm_flags;
 };
 
-static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
-			  struct kvm_s2_trans *nested,
-			  struct kvm_memory_slot *memslot, unsigned long hva,
-			  bool fault_is_perm)
+static int kvm_s2_fault_pin_pfn(struct kvm_s2_fault *fault)
 {
-	int ret = 0;
-	struct kvm_s2_fault fault_data = {
-		.vcpu = vcpu,
-		.fault_ipa = fault_ipa,
-		.nested = nested,
-		.memslot = memslot,
-		.hva = hva,
-		.fault_is_perm = fault_is_perm,
-		.ipa = fault_ipa,
-		.logging_active = memslot_is_logging(memslot),
-		.force_pte = memslot_is_logging(memslot),
-		.s2_force_noncacheable = false,
-		.vfio_allow_any_uc = false,
-		.prot = KVM_PGTABLE_PROT_R,
-	};
-	struct kvm_s2_fault *fault = &fault_data;
-	struct kvm *kvm = vcpu->kvm;
 	struct vm_area_struct *vma;
-	void *memcache;
-	struct kvm_pgtable *pgt;
-	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
-
-	if (fault->fault_is_perm)
-		fault->fault_granule = kvm_vcpu_trap_get_perm_fault_granule(fault->vcpu);
-	fault->write_fault = kvm_is_write_fault(fault->vcpu);
-	fault->exec_fault = kvm_vcpu_trap_is_exec_fault(fault->vcpu);
-	VM_WARN_ON_ONCE(fault->write_fault && fault->exec_fault);
+	struct kvm *kvm = fault->vcpu->kvm;
 
-	/*
-	 * Permission faults just need to update the existing leaf entry,
-	 * and so normally don't require allocations from the memcache. The
-	 * only exception to this is when dirty logging is enabled at runtime
-	 * and a write fault needs to collapse a block entry into a table.
-	 */
-	fault->topup_memcache = !fault->fault_is_perm ||
-				(fault->logging_active && fault->write_fault);
-	ret = prepare_mmu_memcache(fault->vcpu, fault->topup_memcache, &memcache);
-	if (ret)
-		return ret;
-
-	/*
-	 * Let's check if we will get back a huge page backed by hugetlbfs, or
-	 * get block mapping for device MMIO region.
-	 */
 	mmap_read_lock(current->mm);
 	vma = vma_lookup(current->mm, fault->hva);
 	if (unlikely(!vma)) {
@@ -1842,6 +1798,63 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	if (is_error_noslot_pfn(fault->pfn))
 		return -EFAULT;
+	return 1;
+}
+
+static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+			  struct kvm_s2_trans *nested,
+			  struct kvm_memory_slot *memslot, unsigned long hva,
+			  bool fault_is_perm)
+{
+	int ret = 0;
+	struct kvm_s2_fault fault_data = {
+		.vcpu = vcpu,
+		.fault_ipa = fault_ipa,
+		.nested = nested,
+		.memslot = memslot,
+		.hva = hva,
+		.fault_is_perm = fault_is_perm,
+		.ipa = fault_ipa,
+		.logging_active = memslot_is_logging(memslot),
+		.force_pte = memslot_is_logging(memslot),
+		.s2_force_noncacheable = false,
+		.vfio_allow_any_uc = false,
+		.prot = KVM_PGTABLE_PROT_R,
+	};
+	struct kvm_s2_fault *fault = &fault_data;
+	struct kvm *kvm = vcpu->kvm;
+	void *memcache;
+	struct kvm_pgtable *pgt;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED;
+
+	if (fault->fault_is_perm)
+		fault->fault_granule = kvm_vcpu_trap_get_perm_fault_granule(fault->vcpu);
+	fault->write_fault = kvm_is_write_fault(fault->vcpu);
+	fault->exec_fault = kvm_vcpu_trap_is_exec_fault(fault->vcpu);
+	VM_WARN_ON_ONCE(fault->write_fault && fault->exec_fault);
+
+	/*
+	 * Permission faults just need to update the existing leaf entry,
+	 * and so normally don't require allocations from the memcache. The
+	 * only exception to this is when dirty logging is enabled at runtime
+	 * and a write fault needs to collapse a block entry into a table.
+	 */
+	fault->topup_memcache = !fault->fault_is_perm ||
+				(fault->logging_active && fault->write_fault);
+	ret = prepare_mmu_memcache(fault->vcpu, fault->topup_memcache, &memcache);
+	if (ret)
+		return ret;
+
+	/*
+	 * Let's check if we will get back a huge page backed by hugetlbfs, or
+	 * get block mapping for device MMIO region.
+	 */
+	ret = kvm_s2_fault_pin_pfn(fault);
+	if (ret != 1)
+		return ret;
+
+	ret = 0;
+
 	/*
 	 * Check if this is non-struct page memory PFN, and cannot support
 	 * CMOs. It could potentially be unsafe to access as cacheable.
-- 
2.47.3