public inbox for linux-coco@lists.linux.dev
From: Sean Christopherson <seanjc@google.com>
To: Thomas Gleixner <tglx@kernel.org>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	 Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org,  Kiryl Shutsemau <kas@kernel.org>,
	Sean Christopherson <seanjc@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev,
	 kvm@vger.kernel.org, Kai Huang <kai.huang@intel.com>,
	 Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Yan Zhao <yan.y.zhao@intel.com>,
	 Vishal Annapurve <vannapurve@google.com>,
	Ackerley Tng <ackerleytng@google.com>,
	 Sagi Shahar <sagis@google.com>,
	Binbin Wu <binbin.wu@linux.intel.com>,
	 Xiaoyao Li <xiaoyao.li@intel.com>,
	Isaku Yamahata <isaku.yamahata@intel.com>
Subject: [RFC PATCH v5 16/45] x86/virt/tdx: Add tdx_alloc/free_control_page() helpers
Date: Wed, 28 Jan 2026 17:14:48 -0800
Message-ID: <20260129011517.3545883-17-seanjc@google.com>
In-Reply-To: <20260129011517.3545883-1-seanjc@google.com>

From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Add helpers to use when allocating or preparing pages that are gifted to
the TDX-Module for use as control/S-EPT pages, and thus need DPAMT backing.
Make them handle races internally for the case of multiple callers trying
to operate on the same 2MB range simultaneously.

While the TDX initialization code in arch/x86 uses pages with 2MB
alignment, KVM will need to hand 4KB pages to the TDX module for it to
use. Under DPAMT, these pages will need 4KB DPAMT backing.

Add tdx_alloc_control_page() and tdx_free_control_page() to handle both
page allocation and DPAMT installation. Make them behave like normal
alloc/free functions where allocation can fail in the case of no memory,
but free (with any necessary DPAMT release) always succeeds. Do this so
they can support the existing TDX flows that require cleanups to succeed.
Also create tdx_pamt_get()/tdx_pamt_put() to handle installing and
releasing DPAMT 4KB backing for pages that are already allocated (such as
external page tables, or S-EPT pages).

Allocate the pages with GFP_KERNEL_ACCOUNT, as the allocations will be
easily user triggerable.

Since the source of these pages is the page allocator, multiple TDs could
each get 4KB pages that are covered by the same 2MB range. When this
happens only one page pair needs to be installed to cover the 2MB range.
Similarly, when one page is freed, the DPAMT backing cannot be freed until
all TDX pages in the range are no longer in use. Have the helpers manage
these races internally.

So the requirements are that:

1. Allocation paths need to handle finding that DPAMT backing is already
   installed, and only return an error in the case of no memory, not when
   losing a race with others operating on the same DPAMT range.
2. Free paths cannot fail (i.e. no TDX module BUSY errors), and need to
   clean up the DPAMT backing when the last page in the 2MB range is no
   longer in use by TDX.

Previous changes allocated refcounts to be used to track how many 4KB
pages are in use by TDX for each 2MB region. So update those inside the
helpers and use them to decide when to actually install the DPAMT backing
pages.

tdx_pamt_get() needs to guarantee the DPAMT backing is installed before
returning so that racing threads don't tell the TDX module to operate on
the page before it's installed. Take a lock while adjusting the refcount
and doing the actual TDH.PHYMEM.PAMT.ADD/REMOVE to make sure these happen
atomically. The lock is heavyweight, but will be optimized in future
changes. Just do the simple solution before any complex improvements.

TDH.PHYMEM.PAMT.ADD/REMOVE take exclusive locks at the granularity of each
2MB range. Simultaneous attempts to operate on the same 2MB region would
result in a BUSY error code returned from the SEAMCALL. Since the
invocations of these SEAMCALLs are behind a lock, this won't conflict.

Besides the contention between TDH.PHYMEM.PAMT.ADD/REMOVE, many other
SEAMCALLs take the same 2MB granularity locks as shared. This means any
attempt by the TDX module to operate on a page while simultaneously doing
PAMT.ADD/REMOVE will result in a BUSY error. This should not happen, as
the PAMT pages always have to be installed before giving the pages to the
TDX module anyway.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
[Add feedback, update log]
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/tdx.h  |  24 +++-
 arch/x86/virt/vmx/tdx/tdx.c | 264 ++++++++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/tdx.h |   2 +
 3 files changed, 289 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 57d5f07e3735..fa29be18498c 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -16,6 +16,7 @@
 
 #include <uapi/asm/mce.h>
 #include <asm/tdx_global_metadata.h>
+#include <linux/mm.h>
 #include <linux/pgtable.h>
 
 /*
@@ -135,11 +136,32 @@ static inline bool tdx_supports_dynamic_pamt(const struct tdx_sys_info *sysinfo)
 	return false; /* To be enabled when kernel is ready */
 }
 
+void tdx_quirk_reset_page(struct page *page);
+
 int tdx_guest_keyid_alloc(void);
 u32 tdx_get_nr_guest_keyids(void);
 void tdx_guest_keyid_free(unsigned int keyid);
 
-void tdx_quirk_reset_page(struct page *page);
+struct page *__tdx_alloc_control_page(gfp_t gfp);
+void __tdx_free_control_page(struct page *page);
+
+static inline unsigned long tdx_alloc_control_page(gfp_t gfp)
+{
+	struct page *page = __tdx_alloc_control_page(gfp);
+
+	if (!page)
+		return 0;
+
+	return (unsigned long)page_address(page);
+}
+
+static inline void tdx_free_control_page(unsigned long addr)
+{
+	if (!addr)
+		return;
+
+	__tdx_free_control_page(virt_to_page(addr));
+}
 
 struct tdx_td {
 	/* TD root structure: */
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index f6e80aba5895..682c8a228b53 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1824,6 +1824,50 @@ u64 tdh_mng_rd(struct tdx_td *td, u64 field, u64 *data)
 }
 EXPORT_SYMBOL_FOR_KVM(tdh_mng_rd);
 
+/* Number of PAMT pages to be provided to the TDX module per 2MB region of PA */
+static int tdx_dpamt_entry_pages(void)
+{
+	if (!tdx_supports_dynamic_pamt(&tdx_sysinfo))
+		return 0;
+
+	return tdx_sysinfo.tdmr.pamt_4k_entry_size * PTRS_PER_PTE / PAGE_SIZE;
+}
+
+/*
+ * For SEAMCALLs that pass a bundle of pages, the TDX spec treats the registers
+ * like an array, as they are ordered in the struct.  The effective array size
+ * is (obviously) limited by the number of registers, relative to the starting
+ * register.  Fill the register array at a given starting register, with sanity
+ * checks to avoid overflowing the args structure.
+ */
+static void dpamt_copy_regs_array(struct tdx_module_args *args, void *reg,
+				  u64 *pamt_pa_array, bool copy_to_regs)
+{
+	int size = tdx_dpamt_entry_pages() * sizeof(*pamt_pa_array);
+
+	if (WARN_ON_ONCE(reg + size > (void *)args + sizeof(*args)))
+		return;
+
+	/* Copy PAMT page PA's to/from the struct per the TDX ABI. */
+	if (copy_to_regs)
+		memcpy(reg, pamt_pa_array, size);
+	else
+		memcpy(pamt_pa_array, reg, size);
+}
+
+#define dpamt_copy_from_regs(dst, args, reg)	\
+	dpamt_copy_regs_array(args, &(args)->reg, dst, false)
+
+#define dpamt_copy_to_regs(args, reg, src)	\
+	dpamt_copy_regs_array(args, &(args)->reg, src, true)
+
+/*
+ * When declaring PAMT arrays on the stack, use the maximum theoretical number
+ * of entries that can be squeezed into a SEAMCALL, as stack allocations are
+ * practically free, i.e. any wasted space is a non-issue.
+ */
+#define MAX_NR_DPAMT_ARGS (sizeof(struct tdx_module_args) / sizeof(u64))
+
 u64 tdh_mr_extend(struct tdx_td *td, u64 gpa, u64 *ext_err1, u64 *ext_err2)
 {
 	struct tdx_module_args args = {
@@ -2020,6 +2064,226 @@ u64 tdh_phymem_page_wbinvd_hkid(u64 hkid, struct page *page)
 }
 EXPORT_SYMBOL_FOR_KVM(tdh_phymem_page_wbinvd_hkid);
 
+static int alloc_pamt_array(u64 *pa_array)
+{
+	struct page *page;
+	int i;
+
+	for (i = 0; i < tdx_dpamt_entry_pages(); i++) {
+		page = alloc_page(GFP_KERNEL_ACCOUNT);
+		if (!page)
+			goto err;
+		pa_array[i] = page_to_phys(page);
+	}
+
+	return 0;
+err:
+	/*
+	 * Zero the rest of the array to help with
+	 * freeing in error paths.
+	 */
+	for (; i < tdx_dpamt_entry_pages(); i++)
+		pa_array[i] = 0;
+	return -ENOMEM;
+}
+
+static void free_pamt_array(u64 *pa_array)
+{
+	for (int i = 0; i < tdx_dpamt_entry_pages(); i++) {
+		if (!pa_array[i])
+			break;
+
+		/*
+		 * Reset pages unconditionally to cover cases
+		 * where they were passed to the TDX module.
+		 */
+		tdx_quirk_reset_paddr(pa_array[i], PAGE_SIZE);
+
+		__free_page(phys_to_page(pa_array[i]));
+	}
+}
+
+/*
+ * Calculate the arg needed for operating on the DPAMT backing for
+ * a given 4KB page.
+ */
+static u64 pamt_2mb_arg(struct page *page)
+{
+	unsigned long hpa_2mb = ALIGN_DOWN(page_to_phys(page), PMD_SIZE);
+
+	return hpa_2mb | TDX_PS_2M;
+}
+
+/*
+ * Add PAMT backing for the given page.  Returns the raw SEAMCALL status,
+ * i.e. TDX_SUCCESS on success or a TDX module error code on failure.
+ */
+static u64 tdh_phymem_pamt_add(struct page *page, u64 *pamt_pa_array)
+{
+	struct tdx_module_args args = {
+		.rcx = pamt_2mb_arg(page)
+	};
+
+	dpamt_copy_to_regs(&args, rdx, pamt_pa_array);
+
+	return seamcall(TDH_PHYMEM_PAMT_ADD, &args);
+}
+
+/* Remove PAMT backing for the given page. */
+static u64 tdh_phymem_pamt_remove(struct page *page, u64 *pamt_pa_array)
+{
+	struct tdx_module_args args = {
+		.rcx = pamt_2mb_arg(page),
+	};
+	u64 ret;
+
+	ret = seamcall_ret(TDH_PHYMEM_PAMT_REMOVE, &args);
+	if (ret)
+		return ret;
+
+	dpamt_copy_from_regs(pamt_pa_array, &args, rdx);
+	return 0;
+}
+
+/* Serializes adding/removing PAMT memory */
+static DEFINE_SPINLOCK(pamt_lock);
+
+/* Bump PAMT refcount for the given page and allocate PAMT memory if needed */
+static int tdx_pamt_get(struct page *page)
+{
+	u64 pamt_pa_array[MAX_NR_DPAMT_ARGS];
+	atomic_t *pamt_refcount;
+	u64 tdx_status;
+	int ret;
+
+	if (!tdx_supports_dynamic_pamt(&tdx_sysinfo))
+		return 0;
+
+	ret = alloc_pamt_array(pamt_pa_array);
+	if (ret)
+		goto out_free;
+
+	pamt_refcount = tdx_find_pamt_refcount(page_to_pfn(page));
+
+	scoped_guard(spinlock, &pamt_lock) {
+		/*
+		 * If the pamt page is already added (i.e. refcount >= 1),
+		 * then just increment the refcount.
+		 */
+		if (atomic_read(pamt_refcount)) {
+			atomic_inc(pamt_refcount);
+			goto out_free;
+		}
+
+		/* Try to add the pamt page and take the refcount 0->1. */
+		tdx_status = tdh_phymem_pamt_add(page, pamt_pa_array);
+		if (WARN_ON_ONCE(!IS_TDX_SUCCESS(tdx_status))) {
+			ret = -EIO;
+			goto out_free;
+		}
+
+		atomic_inc(pamt_refcount);
+	}
+
+	return 0;
+
+out_free:
+	/*
+	 * pamt_pa_array is populated or zeroed up to tdx_dpamt_entry_pages()
+	 * above. free_pamt_array() can handle either case.
+	 */
+	free_pamt_array(pamt_pa_array);
+	return ret;
+}
+
+/*
+ * Drop PAMT refcount for the given page and free PAMT memory if it is no
+ * longer needed.
+ */
+static void tdx_pamt_put(struct page *page)
+{
+	u64 pamt_pa_array[MAX_NR_DPAMT_ARGS];
+	atomic_t *pamt_refcount;
+	u64 tdx_status;
+
+	if (!tdx_supports_dynamic_pamt(&tdx_sysinfo))
+		return;
+
+	pamt_refcount = tdx_find_pamt_refcount(page_to_pfn(page));
+
+	scoped_guard(spinlock, &pamt_lock) {
+		/*
+		 * If there is more than one reference on the pamt page,
+		 * don't remove it yet. Just decrement the refcount.
+		 */
+		if (atomic_read(pamt_refcount) > 1) {
+			atomic_dec(pamt_refcount);
+			return;
+		}
+
+		/* Try to remove the pamt page and take the refcount 1->0. */
+		tdx_status = tdh_phymem_pamt_remove(page, pamt_pa_array);
+
+		/*
+		 * Don't free pamt_pa_array as it could hold garbage when
+		 * tdh_phymem_pamt_remove() fails.  Don't panic/BUG_ON(), as
+		 * there is no risk of data corruption, but do yell loudly as
+		 * failure indicates a kernel bug, memory is being leaked, and
+		 * the dangling PAMT entry may cause future operations to fail.
+		 */
+		if (WARN_ON_ONCE(!IS_TDX_SUCCESS(tdx_status)))
+			return;
+
+		atomic_dec(pamt_refcount);
+	}
+
+	/*
+	 * pamt_pa_array is populated up to tdx_dpamt_entry_pages() by the TDX
+	 * module on a successful PAMT.REMOVE, and all failure paths return
+	 * before reaching this point, so it's safe to free unconditionally.
+	 */
+	free_pamt_array(pamt_pa_array);
+}
+
+/*
+ * Return a page that can be gifted to the TDX-Module for use as a "control"
+ * page, i.e. pages that are used for control and S-EPT structures for a given
+ * TDX guest, and bound to said guest's HKID and thus obtain TDX protections,
+ * including PAMT tracking.
+ */
+struct page *__tdx_alloc_control_page(gfp_t gfp)
+{
+	struct page *page;
+
+	page = alloc_page(gfp);
+	if (!page)
+		return NULL;
+
+	if (tdx_pamt_get(page)) {
+		__free_page(page);
+		return NULL;
+	}
+
+	return page;
+}
+EXPORT_SYMBOL_FOR_KVM(__tdx_alloc_control_page);
+
+/*
+ * Free a page that was gifted to the TDX-Module for use as a control/S-EPT
+ * page. After this, the page is no longer protected by TDX.
+ */
+void __tdx_free_control_page(struct page *page)
+{
+	if (!page)
+		return;
+
+	tdx_pamt_put(page);
+	__free_page(page);
+}
+EXPORT_SYMBOL_FOR_KVM(__tdx_free_control_page);
+
 #ifdef CONFIG_KEXEC_CORE
 void tdx_cpu_flush_cache_for_kexec(void)
 {
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 82bb82be8567..46c4214b79fb 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -46,6 +46,8 @@
 #define TDH_PHYMEM_PAGE_WBINVD		41
 #define TDH_VP_WR			43
 #define TDH_SYS_CONFIG			45
+#define TDH_PHYMEM_PAMT_ADD		58
+#define TDH_PHYMEM_PAMT_REMOVE		59
 
 /*
  * SEAMCALL leaf:
-- 
2.53.0.rc1.217.geba53bf80e-goog

