From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ackerley Tng <ackerleytng@google.com>
Date: Tue, 28 Apr 2026 16:25:18 -0700
Subject: [PATCH RFC v5 23/53] KVM: x86: Support SW_PROTECTED_VM in applying content modes
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260428-gmem-inplace-conversion-v5-23-d8608ccfca22@google.com>
References: <20260428-gmem-inplace-conversion-v5-0-d8608ccfca22@google.com>
In-Reply-To: <20260428-gmem-inplace-conversion-v5-0-d8608ccfca22@google.com>
To: aik@amd.com, andrew.jones@linux.dev, binbin.wu@linux.intel.com,
 brauner@kernel.org, chao.p.peng@linux.intel.com, david@kernel.org,
 ira.weiny@intel.com, jmattson@google.com, jthoughton@google.com,
 michael.roth@amd.com, oupton@kernel.org, pankaj.gupta@amd.com,
 qperret@google.com, rick.p.edgecombe@intel.com, rientjes@google.com,
 shivankg@amd.com, steven.price@arm.com, tabba@google.com,
 willy@infradead.org, wyihan@google.com, yan.y.zhao@intel.com,
 forkloop@google.com, pratyush@kernel.org, suzuki.poulose@arm.com,
 aneesh.kumar@kernel.org, Paolo Bonzini, Sean Christopherson,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Steven Rostedt, Masami Hiramatsu,
 Mathieu Desnoyers, Jonathan Corbet, Shuah Khan, Shuah Khan,
 Vishal Annapurve, Andrew Morton, Chris Li, Kairui Song, Kemeng Shi,
 Nhat Pham, Baoquan He, Barry Song, Axel Rasmussen, Yuanchu Xie,
 Wei Xu, Youngjun Park, Qi Zheng, Shakeel Butt, Kiryl Shutsemau,
 Jason Gunthorpe, Vlastimil Babka
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-mm@kvack.org,
 linux-coco@lists.linux.dev, Ackerley Tng <ackerleytng@google.com>
X-Mailer: b4 0.14.3

Override the architecture-specific guest_memfd content mode functions
for x86 to provide support for KVM_X86_SW_PROTECTED_VM.

For software-protected VMs, advertise KVM_SET_MEMORY_ATTRIBUTES2_ZERO
and KVM_SET_MEMORY_ATTRIBUTES2_PRESERVE as supported content modes.

Implement the logic for these modes as follows:

+ ZERO: Zero out the memory using the generic guest_memfd helper.

+ PRESERVE: Maintain the existing memory content without modification.

+ UNSPECIFIED: KVM_X86_SW_PROTECTED_VM is guest_memfd's testing vehicle.
  Scramble the memory range by filling it with random bytes so test
  behavior will differ from that of PRESERVE.
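The three content modes above can be sketched as plain C over an ordinary byte buffer. This is a minimal userspace analogue for illustration only, not the kernel implementation: `apply_content_mode` and the `MODE_*` names are made up here, and `rand()` stands in for the kernel's `get_random_bytes()`.

```c
#include <stdlib.h>
#include <string.h>

enum content_mode { MODE_ZERO, MODE_PRESERVE, MODE_UNSPECIFIED };

/* Apply one of the three content modes to a byte range. */
void apply_content_mode(enum content_mode mode, unsigned char *buf, size_t len)
{
	size_t i;

	switch (mode) {
	case MODE_ZERO:
		/* Analogue of the generic zeroing helper. */
		memset(buf, 0, len);
		break;
	case MODE_PRESERVE:
		/* Do nothing to preserve content. */
		break;
	case MODE_UNSPECIFIED:
		/* Scramble the range (stand-in for get_random_bytes()). */
		for (i = 0; i < len; i++)
			buf[i] = (unsigned char)rand();
		break;
	}
}
```

Scrambling on UNSPECIFIED is deliberate: if it behaved like PRESERVE, a test could not tell whether a caller that passed no mode accidentally relied on contents surviving.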
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 arch/x86/kvm/x86.c | 93 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6609957ecfea3..e8abff71001eb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -14194,6 +14194,99 @@ void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
 	kvm_x86_call(gmem_invalidate)(start, end);
 }
 #endif
+
+u64 kvm_arch_gmem_supported_content_modes(struct kvm *kvm, bool to_private)
+{
+	if (!kvm) {
+		return KVM_SET_MEMORY_ATTRIBUTES2_ZERO |
+		       KVM_SET_MEMORY_ATTRIBUTES2_PRESERVE;
+	}
+
+	switch (kvm->arch.vm_type) {
+	case KVM_X86_SW_PROTECTED_VM:
+		return KVM_SET_MEMORY_ATTRIBUTES2_ZERO |
+		       KVM_SET_MEMORY_ATTRIBUTES2_PRESERVE;
+	default:
+		return 0;
+	}
+}
+
+int kvm_arch_gmem_apply_content_mode_zero(struct kvm *kvm, struct inode *inode,
+					  pgoff_t start, pgoff_t end)
+{
+	switch (kvm->arch.vm_type) {
+	case KVM_X86_SW_PROTECTED_VM:
+		return kvm_gmem_apply_content_mode_zero(inode, start, end);
+	default:
+		return 0;
+	}
+}
+
+int kvm_arch_gmem_apply_content_mode_preserve(struct kvm *kvm,
+					      struct inode *inode,
+					      pgoff_t start, pgoff_t end)
+{
+	switch (kvm->arch.vm_type) {
+	case KVM_X86_SW_PROTECTED_VM:
+		/* Do nothing to preserve content. */
+		return 0;
+	default:
+		/* Not a valid content mode for other types, so do nothing. */
+		return 0;
+	}
+}
+
+static int __scramble_range(struct inode *inode, pgoff_t start, pgoff_t end)
+{
+	struct address_space *mapping = inode->i_mapping;
+	struct folio_batch fbatch;
+	struct folio *f;
+	char *kaddr;
+	int ret = 0;
+	int i;
+
+	folio_batch_init(&fbatch);
+	while (!ret && filemap_get_folios(mapping, &start, end - 1, &fbatch)) {
+		for (i = 0; !ret && i < folio_batch_count(&fbatch); ++i) {
+			f = fbatch.folios[i];
+
+			folio_lock(f);
+
+			if (folio_test_hwpoison(f)) {
+				ret = -EHWPOISON;
+			} else {
+				/*
+				 * Hard-coding range to scramble since
+				 * guest_memfd only supports PAGE_SIZE
+				 * folios now.
+				 */
+				kaddr = kmap_local_folio(f, 0);
+				get_random_bytes(kaddr, PAGE_SIZE);
+				kunmap_local(kaddr);
+			}
+
+			folio_unlock(f);
+		}
+
+		folio_batch_release(&fbatch);
+		cond_resched();
+	}
+
+	return ret;
+}
+
+int kvm_arch_gmem_apply_content_mode_unspecified(struct kvm *kvm,
+						 struct inode *inode,
+						 pgoff_t start, pgoff_t end)
+{
+	switch (kvm->arch.vm_type) {
+	case KVM_X86_SW_PROTECTED_VM:
+		return __scramble_range(inode, start, end);
+	default:
+		return 0;
+	}
+}
+
 #endif
 
 int kvm_spec_ctrl_test_value(u64 value)
--
2.54.0.545.g6539524ca2-goog