From: Ackerley Tng via B4 Relay
Date: Thu, 07 May 2026 13:22:31 -0700
Subject: [PATCH v6 12/43] KVM: guest_memfd: Call arch invalidate hooks on conversion
Message-Id: <20260507-gmem-inplace-conversion-v6-12-91ab5a8b19a4@google.com>
References: <20260507-gmem-inplace-conversion-v6-0-91ab5a8b19a4@google.com>
In-Reply-To: <20260507-gmem-inplace-conversion-v6-0-91ab5a8b19a4@google.com>
To: aik@amd.com, andrew.jones@linux.dev, binbin.wu@linux.intel.com,
    brauner@kernel.org, chao.p.peng@linux.intel.com, david@kernel.org,
    ira.weiny@intel.com, jmattson@google.com, jthoughton@google.com,
    michael.roth@amd.com, oupton@kernel.org, pankaj.gupta@amd.com,
    qperret@google.com, rick.p.edgecombe@intel.com, rientjes@google.com,
    shivankg@amd.com, steven.price@arm.com, tabba@google.com,
    willy@infradead.org, wyihan@google.com, yan.y.zhao@intel.com,
    forkloop@google.com, pratyush@kernel.org, suzuki.poulose@arm.com,
    aneesh.kumar@kernel.org, liam@infradead.org, Paolo Bonzini,
    Sean Christopherson, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, x86@kernel.org, "H. Peter Anvin", Steven Rostedt,
    Masami Hiramatsu, Mathieu Desnoyers, Jonathan Corbet, Shuah Khan,
    Vishal Annapurve, Andrew Morton, Chris Li, Kairui Song, Kemeng Shi,
    Nhat Pham, Baoquan He, Barry Song, Axel Rasmussen, Yuanchu Xie,
    Wei Xu, Youngjun Park, Qi Zheng, Shakeel Butt, Kiryl Shutsemau,
    Jason Gunthorpe, Vlastimil Babka
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kselftest@vger.kernel.org, linux-mm@kvack.org,
    linux-coco@lists.linux.dev, Ackerley Tng
Reply-To: ackerleytng@google.com
X-Mailer: b4 0.14.3

From: Ackerley Tng

When memory in guest_memfd is converted from private to shared, the
platform-specific state associated with the guest-private pages must be
invalidated or cleaned up.

Iterate over the folios in the affected range and call the
kvm_arch_gmem_invalidate() hook for each PFN range. This allows
architectures to perform necessary teardown, such as updating hardware
metadata or encryption state, before the pages transition to the shared
state.

Invoke this helper after indicating to KVM's MMU code that an
invalidation is in progress, to stop in-flight page faults from
succeeding.
Signed-off-by: Ackerley Tng
---
 virt/kvm/guest_memfd.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 9d82642a025e9..baf4b88dead1f 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -603,6 +603,42 @@ static bool kvm_gmem_is_safe_for_conversion(struct inode *inode, pgoff_t start,
 	return safe;
 }
 
+#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
+static void kvm_gmem_invalidate(struct inode *inode, pgoff_t start, pgoff_t end)
+{
+	struct folio_batch fbatch;
+	pgoff_t next = start;
+	int i;
+
+	folio_batch_init(&fbatch);
+	while (filemap_get_folios(inode->i_mapping, &next, end - 1, &fbatch)) {
+		for (i = 0; i < folio_batch_count(&fbatch); ++i) {
+			struct folio *folio = fbatch.folios[i];
+			pgoff_t start_index, end_index;
+			kvm_pfn_t start_pfn, end_pfn;
+
+			start_index = max(start, folio->index);
+			end_index = min(end, folio_next_index(folio));
+			/*
+			 * end_index is either in folio or points to
+			 * the first page of the next folio. Hence,
+			 * all pages in range [start_index, end_index)
+			 * are contiguous.
+			 */
+			start_pfn = folio_file_pfn(folio, start_index);
+			end_pfn = start_pfn + end_index - start_index;
+
+			kvm_arch_gmem_invalidate(start_pfn, end_pfn);
+		}
+
+		folio_batch_release(&fbatch);
+		cond_resched();
+	}
+}
+#else
+static void kvm_gmem_invalidate(struct inode *inode, pgoff_t start, pgoff_t end) {}
+#endif
+
 static int __kvm_gmem_set_attributes(struct inode *inode, pgoff_t start,
 				     size_t nr_pages, uint64_t attrs,
 				     pgoff_t *err_index)
@@ -643,7 +679,12 @@ static int __kvm_gmem_set_attributes(struct inode *inode, pgoff_t start,
 	 */
 	kvm_gmem_invalidate_begin(inode, start, end);
+
+	if (!to_private)
+		kvm_gmem_invalidate(inode, start, end);
+
 	mas_store_prealloc(&mas, xa_mk_value(attrs));
+
 	kvm_gmem_invalidate_end(inode, start, end);
 out:
 	filemap_invalidate_unlock(mapping);
-- 
2.54.0.563.g4f69b47b94-goog