From: Yan Zhao <yan.y.zhao@intel.com>
To: pbonzini@redhat.com, seanjc@google.com
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
x86@kernel.org, rick.p.edgecombe@intel.com,
dave.hansen@intel.com, kirill.shutemov@intel.com,
tabba@google.com, ackerleytng@google.com,
quic_eberman@quicinc.com, michael.roth@amd.com, david@redhat.com,
vannapurve@google.com, vbabka@suse.cz, jroedel@suse.de,
thomas.lendacky@amd.com, pgonda@google.com,
zhiquan1.li@intel.com, fan.du@intel.com, jun.miao@intel.com,
ira.weiny@intel.com, isaku.yamahata@intel.com,
xiaoyao.li@intel.com, binbin.wu@linux.intel.com,
chao.p.peng@intel.com, Yan Zhao <yan.y.zhao@intel.com>
Subject: [RFC PATCH 01/21] KVM: gmem: Allocate 2M huge page from guest_memfd backend
Date: Thu, 24 Apr 2025 11:04:06 +0800
Message-ID: <20250424030406.32670-1-yan.y.zhao@intel.com>
In-Reply-To: <20250424030033.32635-1-yan.y.zhao@intel.com>
Allocate 2M huge pages from the guest_memfd's filemap when the max_order is
greater than or equal to PMD_ORDER.
Introduce a helper function, kvm_gmem_get_max_order(), to compute the
max_order for kvm_gmem_populate() and kvm_gmem_get_pfn(), based on the
GFN/index alignment, the range size, and the consistency of page
attributes.
Pass the max_order to __kvm_gmem_get_pfn(), which invokes
kvm_gmem_get_folio() to allocate a 2M huge page from the gmem filemap if
the max_order is >= PMD_ORDER. __kvm_gmem_get_pfn() then lowers the
max_order if the order of the allocated folio is smaller than the
requested order.
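Condensed from the kvm_gmem_get_folio() change below, the allocation
path simply asks the page cache for a PMD_ORDER folio via
__filemap_get_folio() and falls back to a 4K grab if that fails:

	if (max_order >= PMD_ORDER) {
		fgf_t fgp_flags = FGP_LOCK | FGP_ACCESSED | FGP_CREAT |
				  fgf_set_order(1U << (PAGE_SHIFT + PMD_ORDER));

		/* Ask the page cache for a 2M (PMD_ORDER) folio. */
		folio = __filemap_get_folio(inode->i_mapping, index, fgp_flags,
					    mapping_gfp_mask(inode->i_mapping));
	}
	if (!folio || IS_ERR(folio))
		/* Fall back to a single 4K folio. */
		folio = filemap_grab_folio(inode->i_mapping, index);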
Note:
This patch just serves as a glue layer on top of Michael Roth's
series [1], showing TDX's basic assumption about guest_memfd, i.e.
guest_memfd allocates private huge pages whenever the GFN/index
alignment, the range size, and the consistency of page attributes allow
it.
As Dave mentioned at [2],
"Probably a good idea to focus on the long-term use case where we
have in-place conversion support, and only allow truncation in hugepage
(e.g., 2 MiB) size; conversion shared<->private could still be done on 4
KiB granularity as for hugetlb.",
"In general, I think our time is better spent
working on the real deal than on interim solutions that should not be
called "THP support",
Please don't spend much time reviewing this patch, as it will probably
be dropped or appear in another form once guest_memfd's hugetlb-based
in-place-conversion solution is available.
Link: https://lore.kernel.org/all/20241212063635.712877-1-michael.roth@amd.com [1]
Link: https://lore.kernel.org/all/7c86c45c-17e4-4e9b-8d80-44fdfd37f38b@redhat.com [2]
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
virt/kvm/guest_memfd.c | 153 +++++++++++++++--------------------------
1 file changed, 56 insertions(+), 97 deletions(-)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 5cd3b66063dc..4bb140e7f30d 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -265,36 +265,6 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct file *file,
return r;
}
-static struct folio *kvm_gmem_get_huge_folio(struct inode *inode, pgoff_t index,
- unsigned int order)
-{
- pgoff_t npages = 1UL << order;
- pgoff_t huge_index = round_down(index, npages);
- struct address_space *mapping = inode->i_mapping;
- gfp_t gfp = mapping_gfp_mask(mapping) | __GFP_NOWARN;
- loff_t size = i_size_read(inode);
- struct folio *folio;
-
- /* Make sure hugepages would be fully-contained by inode */
- if ((huge_index + npages) * PAGE_SIZE > size)
- return NULL;
-
- if (filemap_range_has_page(mapping, (loff_t)huge_index << PAGE_SHIFT,
- (loff_t)(huge_index + npages - 1) << PAGE_SHIFT))
- return NULL;
-
- folio = filemap_alloc_folio(gfp, order);
- if (!folio)
- return NULL;
-
- if (filemap_add_folio(mapping, folio, huge_index, gfp)) {
- folio_put(folio);
- return NULL;
- }
-
- return folio;
-}
-
/*
* Returns a locked folio on success. The caller is responsible for
* setting the up-to-date flag before the memory is mapped into the guest.
@@ -304,14 +274,19 @@ static struct folio *kvm_gmem_get_huge_folio(struct inode *inode, pgoff_t index,
* Ignore accessed, referenced, and dirty flags. The memory is
* unevictable and there is no storage to write back to.
*/
-static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
+static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, int max_order)
{
struct folio *folio = NULL;
- if (gmem_2m_enabled)
- folio = kvm_gmem_get_huge_folio(inode, index, PMD_ORDER);
+ if (max_order >= PMD_ORDER) {
+ fgf_t fgp_flags = FGP_LOCK | FGP_ACCESSED | FGP_CREAT;
- if (!folio)
+ fgp_flags |= fgf_set_order(1U << (PAGE_SHIFT + PMD_ORDER));
+ folio = __filemap_get_folio(inode->i_mapping, index, fgp_flags,
+ mapping_gfp_mask(inode->i_mapping));
+ }
+
+ if (!folio || IS_ERR(folio))
folio = filemap_grab_folio(inode->i_mapping, index);
return folio;
@@ -402,49 +377,11 @@ static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
static long kvm_gmem_allocate(struct inode *inode, loff_t offset, loff_t len)
{
- struct address_space *mapping = inode->i_mapping;
- pgoff_t start, index, end;
- int r;
-
- /* Dedicated guest is immutable by default. */
- if (offset + len > i_size_read(inode))
- return -EINVAL;
-
- filemap_invalidate_lock_shared(mapping);
-
- start = offset >> PAGE_SHIFT;
- end = (offset + len) >> PAGE_SHIFT;
-
- r = 0;
- for (index = start; index < end; ) {
- struct folio *folio;
-
- if (signal_pending(current)) {
- r = -EINTR;
- break;
- }
-
- folio = kvm_gmem_get_folio(inode, index);
- if (IS_ERR(folio)) {
- r = PTR_ERR(folio);
- break;
- }
-
- index = folio_next_index(folio);
-
- folio_unlock(folio);
- folio_put(folio);
-
- /* 64-bit only, wrapping the index should be impossible. */
- if (WARN_ON_ONCE(!index))
- break;
-
- cond_resched();
- }
-
- filemap_invalidate_unlock_shared(mapping);
-
- return r;
+ /*
+	 * Skip supporting allocate for now. This can be added easily after
+ * __kvm_gmem_get_pfn() is settled down.
+ */
+ return -EOPNOTSUPP;
}
static long kvm_gmem_fallocate(struct file *file, int mode, loff_t offset,
@@ -853,7 +790,7 @@ static struct folio *__kvm_gmem_get_pfn(struct file *file,
huge_index + (1ull << *max_order) > slot->gmem.pgoff + slot->npages)
*max_order = 0;
- folio = kvm_gmem_get_folio(file_inode(file), index);
+ folio = kvm_gmem_get_folio(file_inode(file), index, *max_order);
if (IS_ERR(folio))
return folio;
@@ -869,6 +806,40 @@ static struct folio *__kvm_gmem_get_pfn(struct file *file,
return folio;
}
+static int kvm_gmem_get_max_order(struct kvm *kvm, struct kvm_memory_slot *slot,
+ gfn_t gfn, pgoff_t index, long npages, int *max_order)
+{
+ int ret = 0;
+ int order = 0;
+ /*
+ * The max order shouldn't extend beyond the GFN range being
+ * populated in this iteration, so set max_order accordingly.
+ * __kvm_gmem_get_pfn() will then further adjust the order to
+ * one that is contained by the backing memslot/folio.
+ */
+ order = 0;
+
+ while (IS_ALIGNED(gfn, 1 << (order + 1)) && (npages >= (1 << (order + 1))))
+ order++;
+
+ order = min(order, PMD_ORDER);
+
+ while (!kvm_range_has_memory_attributes(kvm, gfn, gfn + (1 << order),
+ KVM_MEMORY_ATTRIBUTE_PRIVATE,
+ KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
+ if (!order) {
+ ret = -ENOENT;
+ return ret;
+ }
+ order--;
+ }
+
+ WARN_ON(!IS_ALIGNED(index, 1 << order) || (npages < (1 << order)));
+
+ *max_order = order;
+ return ret;
+}
+
int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
int *max_order)
@@ -882,15 +853,8 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
if (!file)
return -EFAULT;
- /*
- * The caller might pass a NULL 'max_order', but internally this
- * function needs to be aware of any order limitations set by
- * __kvm_gmem_get_pfn() so the scope of preparation operations can
- * be limited to the corresponding range. The initial order can be
- * arbitrarily large, but gmem doesn't currently support anything
- * greater than PMD_ORDER so use that for now.
- */
- max_order_local = PMD_ORDER;
+ kvm_gmem_get_max_order(kvm, slot, gfn, index, slot->npages - (gfn - slot->base_gfn),
+ &max_order_local);
folio = __kvm_gmem_get_pfn(file, slot, index, pfn, &max_order_local);
if (IS_ERR(folio)) {
@@ -953,6 +917,11 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
break;
}
+ ret = kvm_gmem_get_max_order(kvm, slot, gfn, kvm_gmem_get_index(slot, gfn),
+ npages - i, &max_order);
+ if (ret)
+ break;
+
folio = __kvm_gmem_get_pfn(file, slot, index, &pfn, &max_order);
if (IS_ERR(folio)) {
ret = PTR_ERR(folio);
@@ -967,17 +936,8 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
}
folio_unlock(folio);
- WARN_ON(!IS_ALIGNED(gfn, 1 << max_order) ||
- (npages - i) < (1 << max_order));
ret = -EINVAL;
- while (!kvm_range_has_memory_attributes(kvm, gfn, gfn + (1 << max_order),
- KVM_MEMORY_ATTRIBUTE_PRIVATE,
- KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
- if (!max_order)
- goto put_folio_and_exit;
- max_order--;
- }
p = src ? src + i * PAGE_SIZE : NULL;
ret = post_populate(kvm, gfn, pfn, p, max_order, opaque);
@@ -986,7 +946,6 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
kvm_gmem_mark_prepared(file, index, max_order);
}
-put_folio_and_exit:
folio_put(folio);
if (ret)
break;
--
2.43.2