Date: Tue, 29 Jul 2025 15:54:49 -0700
In-Reply-To: <20250729225455.670324-1-seanjc@google.com>
Mime-Version: 1.0
References: <20250729225455.670324-1-seanjc@google.com>
X-Mailer: git-send-email 2.50.1.552.g942d659e1b-goog
Message-ID: <20250729225455.670324-19-seanjc@google.com>
Subject: [PATCH v17 18/24] KVM: arm64: Handle guest_memfd-backed guest page faults
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, Ira Weiny, Gavin Shan, Shivank Garg,
	Vlastimil Babka, Xiaoyao Li, David Hildenbrand, Fuad Tabba,
	Ackerley Tng, Tao Chan, James Houghton
Content-Type: text/plain; charset="UTF-8"

From: Fuad Tabba

Add arm64 architecture support for handling guest page faults on memory
slots backed by guest_memfd.

This change introduces a new function, gmem_abort(), which encapsulates
the fault handling logic specific to guest_memfd-backed memory. The
kvm_handle_guest_abort() entry point is updated to dispatch to
gmem_abort() when a fault occurs on a guest_memfd-backed memory slot
(as determined by kvm_slot_has_gmem()).

Until guest_memfd gains support for huge pages, the fault granule for
these memory regions is restricted to PAGE_SIZE.
Reviewed-by: Gavin Shan
Reviewed-by: James Houghton
Reviewed-by: Marc Zyngier
Signed-off-by: Fuad Tabba
Signed-off-by: Sean Christopherson
---
 arch/arm64/kvm/mmu.c | 86 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 83 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b3eacb400fab..8c82df80a835 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1512,6 +1512,82 @@ static void adjust_nested_fault_perms(struct kvm_s2_trans *nested,
 		*prot |= kvm_encode_nested_level(nested);
 }
 
+#define KVM_PGTABLE_WALK_MEMABORT_FLAGS (KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED)
+
+static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+		      struct kvm_s2_trans *nested,
+		      struct kvm_memory_slot *memslot, bool is_perm)
+{
+	bool write_fault, exec_fault, writable;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS;
+	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
+	struct kvm_pgtable *pgt = vcpu->arch.hw_mmu->pgt;
+	unsigned long mmu_seq;
+	struct page *page;
+	struct kvm *kvm = vcpu->kvm;
+	void *memcache;
+	kvm_pfn_t pfn;
+	gfn_t gfn;
+	int ret;
+
+	ret = prepare_mmu_memcache(vcpu, true, &memcache);
+	if (ret)
+		return ret;
+
+	if (nested)
+		gfn = kvm_s2_trans_output(nested) >> PAGE_SHIFT;
+	else
+		gfn = fault_ipa >> PAGE_SHIFT;
+
+	write_fault = kvm_is_write_fault(vcpu);
+	exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
+
+	VM_WARN_ON_ONCE(write_fault && exec_fault);
+
+	mmu_seq = kvm->mmu_invalidate_seq;
+	/* Pairs with the smp_wmb() in kvm_mmu_invalidate_end(). */
+	smp_rmb();
+
+	ret = kvm_gmem_get_pfn(kvm, memslot, gfn, &pfn, &page, NULL);
+	if (ret) {
+		kvm_prepare_memory_fault_exit(vcpu, fault_ipa, PAGE_SIZE,
+					      write_fault, exec_fault, false);
+		return ret;
+	}
+
+	writable = !(memslot->flags & KVM_MEM_READONLY);
+
+	if (nested)
+		adjust_nested_fault_perms(nested, &prot, &writable);
+
+	if (writable)
+		prot |= KVM_PGTABLE_PROT_W;
+
+	if (exec_fault ||
+	    (cpus_have_final_cap(ARM64_HAS_CACHE_DIC) &&
+	     (!nested || kvm_s2_trans_executable(nested))))
+		prot |= KVM_PGTABLE_PROT_X;
+
+	kvm_fault_lock(kvm);
+	if (mmu_invalidate_retry(kvm, mmu_seq)) {
+		ret = -EAGAIN;
+		goto out_unlock;
+	}
+
+	ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, PAGE_SIZE,
+						 __pfn_to_phys(pfn), prot,
+						 memcache, flags);
+
+out_unlock:
+	kvm_release_faultin_page(kvm, page, !!ret, writable);
+	kvm_fault_unlock(kvm);
+
+	if (writable && !ret)
+		mark_page_dirty_in_slot(kvm, memslot, gfn);
+
+	return ret != -EAGAIN ? ret : 0;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_s2_trans *nested,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
@@ -1536,7 +1612,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
-	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS;
 
 	if (fault_is_perm)
 		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1961,8 +2037,12 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	VM_WARN_ON_ONCE(kvm_vcpu_trap_is_permission_fault(vcpu) &&
 			!write_fault && !kvm_vcpu_trap_is_exec_fault(vcpu));
 
-	ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
-			     esr_fsc_is_permission_fault(esr));
+	if (kvm_slot_has_gmem(memslot))
+		ret = gmem_abort(vcpu, fault_ipa, nested, memslot,
+				 esr_fsc_is_permission_fault(esr));
+	else
+		ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
+				     esr_fsc_is_permission_fault(esr));
 	if (ret == 0)
 		ret = 1;
 out:
-- 
2.50.1.552.g942d659e1b-goog
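
For context, and not part of the patch itself: gmem_abort() only runs for
faults on memslots bound to a guest_memfd, i.e. slots registered with the
KVM_MEM_GUEST_MEMFD flag. The sketch below shows roughly how userspace sets
up such a slot, assuming the generic guest_memfd UAPI from <linux/kvm.h>
(KVM_CREATE_GUEST_MEMFD and KVM_SET_USER_MEMORY_REGION2); the slot number,
size, and guest-physical base are arbitrary example values, and vCPU setup
plus most error handling are omitted.

/*
 * Illustrative userspace sketch: create a guest_memfd and bind it to a
 * memslot so that stage-2 aborts on that slot take the gmem_abort() path
 * rather than user_mem_abort().  Uses only the generic guest_memfd UAPI
 * from <linux/kvm.h>; all constants below are example values.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

#define SLOT_SIZE	(2UL << 20)	/* arbitrary 2MiB slot */
#define SLOT_GPA	0x80000000UL	/* arbitrary guest-physical base */

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR | O_CLOEXEC);
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

	/* guest_memfd backing for the slot. */
	struct kvm_create_guest_memfd gmem = {
		.size = SLOT_SIZE,
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

	if (kvm_fd < 0 || vm_fd < 0 || gmem_fd < 0) {
		perror("kvm/guest_memfd setup");
		return EXIT_FAILURE;
	}

	/*
	 * Host mapping for the slot; whether KVM actually uses it depends
	 * on the VM type and on whether a given gfn is shared or private.
	 */
	void *host_mem = mmap(NULL, SLOT_SIZE, PROT_READ | PROT_WRITE,
			      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	struct kvm_userspace_memory_region2 region = {
		.slot			= 0,
		.flags			= KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr	= SLOT_GPA,
		.memory_size		= SLOT_SIZE,
		.userspace_addr		= (uint64_t)(uintptr_t)host_mem,
		.guest_memfd		= (uint32_t)gmem_fd,
		.guest_memfd_offset	= 0,
	};

	if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region) < 0) {
		perror("KVM_SET_USER_MEMORY_REGION2");
		return EXIT_FAILURE;
	}

	/*
	 * From here, a vCPU faulting in SLOT_GPA..SLOT_GPA+SLOT_SIZE would
	 * be handled by the gmem_abort() path added above.
	 */
	printf("guest_memfd slot bound (fd %d)\n", gmem_fd);
	return EXIT_SUCCESS;
}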