From: Xiaoyao Li <xiaoyao.li@intel.com>
To: Paolo Bonzini <pbonzini@redhat.com>,
David Hildenbrand <david@redhat.com>,
ackerleytng@google.com, seanjc@google.com
Cc: "Fuad Tabba" <tabba@google.com>,
"Vishal Annapurve" <vannapurve@google.com>,
rick.p.edgecombe@intel.com, "Kai Huang" <kai.huang@intel.com>,
binbin.wu@linux.intel.com, yan.y.zhao@intel.com,
ira.weiny@intel.com, michael.roth@amd.com, kvm@vger.kernel.org,
qemu-devel@nongnu.org, "Peter Xu" <peterx@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>
Subject: [POC PATCH 5/5] [HACK] memory: Don't enable in-place conversion for internal MemoryRegion with gmem
Date: Tue, 15 Jul 2025 11:31:41 +0800
Message-ID: <20250715033141.517457-6-xiaoyao.li@intel.com>
In-Reply-To: <20250715033141.517457-1-xiaoyao.li@intel.com>

Currently, TDVF cannot work with gmem in-place conversion, because the
current implementation of KVM_TDX_INIT_MEM_REGION in KVM requires the
TDVF gmem to be valid as both shared and private at the same time.

To work around it, explicitly disable in-place conversion for internal
MemoryRegions backed by gmem, so that TDVF does not use in-place
conversion gmem and KVM_TDX_INIT_MEM_REGION initializes the gmem from a
separate shared memory copy.

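For reference, the current non-in-place init path looks roughly like the
sketch below. The struct kvm_tdx_init_mem_region layout and the command
names follow the kernel UAPI; tdx_vcpu_ioctl() and the parameter names
are illustrative placeholders, not QEMU's actual code.

  /*
   * Sketch only: how TDX initial memory is populated today, without
   * in-place conversion.  The TDVF image sits in a separate shared
   * buffer that KVM copies (PAGE.ADD) into the private gmem pages.
   */
  static void tdvf_init_region_sketch(CPUState *cpu, hwaddr gpa,
                                      void *shared_src, size_t size)
  {
      struct kvm_tdx_init_mem_region region = {
          /* separate shared memory holding the TDVF content */
          .source_addr = (uint64_t)(uintptr_t)shared_src,
          /* private GPA whose gmem pages KVM initializes */
          .gpa = gpa,
          .nr_pages = size / TARGET_PAGE_SIZE,
      };

      tdx_vcpu_ioctl(cpu, KVM_TDX_INIT_MEM_REGION,
                     KVM_TDX_MEASURE_MEMORY_REGION, &region);
  }
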
To make in-place conversion work with TDX's initial memory, one possible
solution, which requires a KVM change, would be the following flow
(sketched in code after the list):
- QEMU creates the gmem as shared;
- QEMU mmaps the gmem and loads the TDVF binary into it;
- QEMU converts the gmem to private with the content preserved [1];
- QEMU invokes KVM_TDX_INIT_MEM_REGION without a valid source, so that
  KVM knows to fetch the content in place and use in-place PAGE.ADD for
  TDX.
[1] https://lore.kernel.org/all/aG0pNijVpl0czqXu@google.com/
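
A hypothetical sketch of that flow follows. The content-preserving
conversion helper and the "no source" handling in KVM_TDX_INIT_MEM_REGION
do not exist yet; every name below is a placeholder pending the KVM-side
change discussed in [1].

  /*
   * Hypothetical in-place flow; steps match the list above.
   */
  static void tdvf_init_in_place_sketch(CPUState *cpu, int gmem_fd,
                                        hwaddr gpa, const void *tdvf_image,
                                        size_t size)
  {
      /* 1. gmem was created shared (mappable), so QEMU can mmap it */
      void *hva = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                       gmem_fd, 0);

      /* 2. Load the TDVF binary directly into the gmem pages */
      memcpy(hva, tdvf_image, size);

      /* 3. Convert shared -> private, keeping page contents (hypothetical) */
      guest_memfd_convert_to_private_preserving(gmem_fd, 0, size);

      /* 4. No valid source: KVM fetches the content in place and PAGE.ADDs it */
      struct kvm_tdx_init_mem_region region = {
          .source_addr = 0,
          .gpa = gpa,
          .nr_pages = size / TARGET_PAGE_SIZE,
      };
      tdx_vcpu_ioctl(cpu, KVM_TDX_INIT_MEM_REGION,
                     KVM_TDX_MEASURE_MEMORY_REGION, &region);
  }
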
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
include/system/memory.h | 3 +++
system/memory.c | 2 +-
system/physmem.c | 8 +++++---
3 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/include/system/memory.h b/include/system/memory.h
index f14fbf65805d..89d6449cef70 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -256,6 +256,9 @@ typedef struct IOMMUTLBEvent {
*/
#define RAM_PRIVATE (1 << 13)
+/* Don't enable in-place conversion for the guest memfd backend */
+#define RAM_GUEST_MEMFD_NO_INPLACE (1 << 14)
+
static inline void iommu_notifier_init(IOMMUNotifier *n, IOMMUNotify fn,
IOMMUNotifierFlag flags,
hwaddr start, hwaddr end,
diff --git a/system/memory.c b/system/memory.c
index 6870a41629ef..c1b73abc4c94 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -3702,7 +3702,7 @@ bool memory_region_init_ram_guest_memfd(MemoryRegion *mr,
DeviceState *owner_dev;
if (!memory_region_init_ram_flags_nomigrate(mr, owner, name, size,
- RAM_GUEST_MEMFD, errp)) {
+ RAM_GUEST_MEMFD | RAM_GUEST_MEMFD_NO_INPLACE, errp)) {
return false;
}
/* This will assert if owner is neither NULL nor a DeviceState.
diff --git a/system/physmem.c b/system/physmem.c
index ea1c27ea2b99..c23379082f38 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -1916,7 +1916,8 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
if (new_block->flags & RAM_GUEST_MEMFD) {
int ret;
- bool in_place = kvm_guest_memfd_inplace_supported;
+ bool in_place = !(new_block->flags & RAM_GUEST_MEMFD_NO_INPLACE) &&
+ kvm_guest_memfd_inplace_supported;
new_block->guest_memfd_flags = 0;
@@ -2230,7 +2231,8 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
ram_flags &= ~RAM_PRIVATE;
assert((ram_flags & ~(RAM_SHARED | RAM_RESIZEABLE | RAM_PREALLOC |
- RAM_NORESERVE | RAM_GUEST_MEMFD)) == 0);
+ RAM_NORESERVE | RAM_GUEST_MEMFD |
+ RAM_GUEST_MEMFD_NO_INPLACE)) == 0);
assert(!host ^ (ram_flags & RAM_PREALLOC));
assert(max_size >= size);
@@ -2314,7 +2316,7 @@ RAMBlock *qemu_ram_alloc(ram_addr_t size, uint32_t ram_flags,
MemoryRegion *mr, Error **errp)
{
assert((ram_flags & ~(RAM_SHARED | RAM_NORESERVE | RAM_GUEST_MEMFD |
- RAM_PRIVATE)) == 0);
+ RAM_PRIVATE | RAM_GUEST_MEMFD_NO_INPLACE)) == 0);
return qemu_ram_alloc_internal(size, size, NULL, NULL, ram_flags, mr, errp);
}
--
2.43.0