From: Sean Christopherson <seanjc@google.com>
To: Yan Zhao <yan.y.zhao@intel.com>
Cc: Thomas Gleixner <tglx@kernel.org>, Ingo Molnar <mingo@redhat.com>,
Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>,
x86@kernel.org, Kiryl Shutsemau <kas@kernel.org>,
Paolo Bonzini <pbonzini@redhat.com>,
linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev,
kvm@vger.kernel.org, Kai Huang <kai.huang@intel.com>,
Rick Edgecombe <rick.p.edgecombe@intel.com>,
Vishal Annapurve <vannapurve@google.com>,
Ackerley Tng <ackerleytng@google.com>,
Sagi Shahar <sagis@google.com>,
Binbin Wu <binbin.wu@linux.intel.com>,
Xiaoyao Li <xiaoyao.li@intel.com>,
Isaku Yamahata <isaku.yamahata@intel.com>
Subject: Re: [RFC PATCH v5 08/45] KVM: x86/mmu: Propagate mirror SPTE removal to S-EPT in handle_changed_spte()
Date: Tue, 10 Feb 2026 11:52:09 -0800
Message-ID: <aYuMaRbVQyUfYJTP@google.com>
In-Reply-To: <aYsOV7Q5FTWo+6/x@yzhao56-desk.sh.intel.com>
On Tue, Feb 10, 2026, Yan Zhao wrote:
> On Fri, Feb 06, 2026 at 09:41:38AM -0800, Sean Christopherson wrote:
> > On Fri, Feb 06, 2026, Yan Zhao wrote:
> > @@ -559,30 +559,31 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
> > * SPTE being converted to a hugepage (leaf) or being zapped. Shadow
> > * pages are kernel allocations and should never be migrated.
> > *
> > - * When modifying leaf entries in mirrored page tables, propagate the
> > - * changes to the external SPTE. Bug the VM on failure, as callers
> > - * aren't prepared to handle errors, e.g. due to lock contention in the
> > - * TDX-Module. Note, changes to non-leaf mirror SPTEs are handled by
> > - * handle_removed_pt() (the TDX-Module requires that child entries are
> > - * removed before the parent SPTE), and changes to non-present mirror
> > - * SPTEs are handled by __tdp_mmu_set_spte_atomic() (KVM needs to set
> > - * the external SPTE while the mirror SPTE is frozen so that installing
> > - * a new SPTE is effectively an atomic operation).
> > + * When modifying leaf entries in mirrored page tables, propagate all
> > + * changes to the external SPTE.
> > */
> > if (was_present && !was_leaf &&
> > (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
> > handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
> > - else if (was_leaf && is_mirror_sptep(sptep))
> > - KVM_BUG_ON(kvm_x86_call(set_external_spte)(kvm, gfn, old_spte,
> > - new_spte, level), kvm);
> > + else if (is_mirror_sptep(sptep))
> > + return kvm_x86_call(set_external_spte)(kvm, gfn, old_spte,
> > + new_spte, level);
> For TDX's future implementation of set_external_spte() for SPTE splitting,
> could we add a new param "bool shared" to the set_external_spte() op in the
> future? i.e.,
> - when tdx_sept_split_private_spte() is invoked under write mmu_lock, it calls
>   tdh_do_no_vcpus() to retry on BUSY errors, and then TDX_BUG_ON_2() on failure.
> - when tdx_sept_split_private_spte() is invoked under read mmu_lock
>   (in the future, when calling tdh_mem_range_block() is unnecessary), it could
>   directly return BUSY to the TDP MMU on contention.
Yeah, I have no objection to using @shared for things like that.
> > + return 0;
> > +}
> > +
> > +static void handle_changed_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
> > + gfn_t gfn, u64 old_spte, u64 new_spte,
> > + int level, bool shared)
> > +{
> Do we need "WARN_ON_ONCE(is_mirror_sptep(sptep) && shared)" here?
No, because I want to call this code for all paths, including the fault path.
> > + KVM_BUG_ON(__handle_changed_spte(kvm, as_id, sptep, gfn, old_spte,
> > + new_spte, level, shared), kvm);
> > }
>
>
>
> >
> > static inline int __must_check __tdp_mmu_set_spte_atomic(struct kvm *kvm,
> > struct tdp_iter *iter,
> > u64 new_spte)
> > {
> > - u64 *raw_sptep = rcu_dereference(iter->sptep);
> > -
> > /*
> > * The caller is responsible for ensuring the old SPTE is not a FROZEN
> > * SPTE. KVM should never attempt to zap or manipulate a FROZEN SPTE,
> > @@ -591,40 +592,6 @@ static inline int __must_check __tdp_mmu_set_spte_atomic(struct kvm *kvm,
> > */
> > WARN_ON_ONCE(iter->yielded || is_frozen_spte(iter->old_spte));
> >
> > - if (is_mirror_sptep(iter->sptep) && !is_frozen_spte(new_spte)) {
> > - int ret;
> > -
> > - /*
> > - * KVM doesn't currently support zapping or splitting mirror
> > - * SPTEs while holding mmu_lock for read.
> > - */
> > - if (KVM_BUG_ON(is_shadow_present_pte(iter->old_spte), kvm) ||
> > - KVM_BUG_ON(!is_shadow_present_pte(new_spte), kvm))
> > - return -EBUSY;
> > -
> > - /*
> > - * Temporarily freeze the SPTE until the external PTE operation
> > - * has completed, e.g. so that concurrent faults don't attempt
> > - * to install a child PTE in the external page table before the
> > - * parent PTE has been written.
> > - */
> > - if (!try_cmpxchg64(raw_sptep, &iter->old_spte, FROZEN_SPTE))
> > - return -EBUSY;
> > -
> > - /*
> > - * Update the external PTE. On success, set the mirror SPTE to
> > - * the desired value. On failure, restore the old SPTE so that
> > - * the SPTE isn't frozen in perpetuity.
> > - */
> > - ret = kvm_x86_call(set_external_spte)(kvm, iter->gfn, iter->old_spte,
> > - new_spte, iter->level);
> > - if (ret)
> > - __kvm_tdp_mmu_write_spte(iter->sptep, iter->old_spte);
> > - else
> > - __kvm_tdp_mmu_write_spte(iter->sptep, new_spte);
> > - return ret;
> > - }
> > -
> > /*
> > * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
> > * does not hold the mmu_lock. On failure, i.e. if a different logical
> > @@ -632,7 +599,7 @@ static inline int __must_check __tdp_mmu_set_spte_atomic(struct kvm *kvm,
> > * the current value, so the caller operates on fresh data, e.g. if it
> > * retries tdp_mmu_set_spte_atomic()
> > */
> > - if (!try_cmpxchg64(raw_sptep, &iter->old_spte, new_spte))
> > + if (!try_cmpxchg64(rcu_dereference(iter->sptep), &iter->old_spte, new_spte))
> > return -EBUSY;
> >
> > return 0;
> > @@ -663,14 +630,44 @@ static inline int __must_check tdp_mmu_set_spte_atomic(struct kvm *kvm,
> >
> > lockdep_assert_held_read(&kvm->mmu_lock);
> >
> > - ret = __tdp_mmu_set_spte_atomic(kvm, iter, new_spte);
> >
> > + /* KVM should never freeze SPTEs using higher level APIs. */
> > + KVM_MMU_WARN_ON(is_frozen_spte(new_spte));
> What about
> KVM_MMU_WARN_ON(is_frozen_spte(new_spte) ||
> is_frozen_spte(iter->old_spte) || iter->yielded);
>
> > + /*
> > + * Temporarily freeze the SPTE until the external PTE operation has
> > + * completed (unless the new SPTE itself will be frozen), e.g. so that
> > + * concurrent faults don't attempt to install a child PTE in the
> > + * external page table before the parent PTE has been written, or try
> > + * to re-install a page table before the old one was removed.
> > + */
> > + if (is_mirror_sptep(iter->sptep))
> > + ret = __tdp_mmu_set_spte_atomic(kvm, iter, FROZEN_SPTE);
> > + else
> > + ret = __tdp_mmu_set_spte_atomic(kvm, iter, new_spte);
> and open coding try_cmpxchg64() directly?
No, because __tdp_mmu_set_spte_atomic() is still used by kvm_tdp_mmu_age_spte(),
and the yielded/frozen rules apply there as well.
> > + /*
> > + * Unfreeze the mirror SPTE. If updating the external SPTE failed,
> > + * restore the old SPTE so that the SPTE isn't frozen in perpetuity,
> > + * otherwise set the mirror SPTE to the new desired value.
> > + */
> > + if (is_mirror_sptep(iter->sptep)) {
> > + if (ret)
> > + __kvm_tdp_mmu_write_spte(iter->sptep, iter->old_spte);
> > + else
> > + __kvm_tdp_mmu_write_spte(iter->sptep, new_spte);
> > + } else {
> > + /*
> > + * Bug the VM if handling the change failed, as failure is only
> > + * allowed if KVM couldn't update the external SPTE.
> > + */
> > + KVM_BUG_ON(ret, kvm);
> > + }
> > + return ret;
> > }
> One concern for tdp_mmu_set_spte_atomic() to handle mirror SPTEs:
> - Previously
> 1. set *iter->sptep to FROZEN_SPTE.
> 2. kvm_x86_call(set_external_spte)(old_spte, new_spte)
> 3. set *iter->sptep to new_spte
>
> - Now with this diff
> 1. set *iter->sptep to FROZEN_SPTE.
> 2. __handle_changed_spte()
> --> kvm_x86_call(set_external_spte)(iter->sptep, old_spte, new_spte)
Note, iter->sptep isn't passed to set_external_spte(), the invocation for that is:
return kvm_x86_call(set_external_spte)(kvm, gfn, old_spte,
new_spte, level);
> 3. set *iter->sptep to new_spte
>
> what if __handle_changed_spte() reads *iter->sptep in step 2?
For the most part, "don't do that". There are an infinite number of "what ifs".
I agree that re-reading iter->sptep is slightly more likely than other "what ifs",
but then if we convert to a boolean it creates the "what if we swap the order of
@as_id and @is_mirror_sp"? Given that @old_spte is provided, IMO re-reading the
SPTE from memory will stand out.
That said, I think we can have the best of both worlds. Rather than pass @as_id
and @sptep, pass the @sp, i.e. the owning kvm_mmu_page. That would address your
concern about re-reading the sptep, without needing another boolean.
E.g. slotted in as a cleanup somewhere earlier:
---
arch/x86/kvm/mmu/tdp_mmu.c | 29 +++++++++++++++--------------
1 file changed, 15 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 732548a678d8..d395da35d5e4 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -326,7 +326,7 @@ void kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu, bool mirror)
}
}
-static void handle_changed_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
+static void handle_changed_spte(struct kvm *kvm, struct kvm_mmu_page *sp,
gfn_t gfn, u64 old_spte, u64 new_spte,
int level, bool shared);
@@ -458,8 +458,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte,
FROZEN_SPTE, level);
}
- handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), sptep, gfn,
- old_spte, FROZEN_SPTE, level, shared);
+ handle_changed_spte(kvm, sp, gfn, old_spte, FROZEN_SPTE, level, shared);
}
if (is_mirror_sp(sp))
@@ -471,8 +470,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
/**
* __handle_changed_spte - handle bookkeeping associated with an SPTE change
* @kvm: kvm instance
- * @as_id: the address space of the paging structure the SPTE was a part of
- * @sptep: pointer to the SPTE
+ * @sp: the page table in which the SPTE resides
* @gfn: the base GFN that was mapped by the SPTE
* @old_spte: The value of the SPTE before the change
* @new_spte: The value of the SPTE after the change
@@ -485,7 +483,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
* dirty logging updates are handled in common code, not here (see make_spte()
* and fast_pf_fix_direct_spte()).
*/
-static int __handle_changed_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
+static int __handle_changed_spte(struct kvm *kvm, struct kvm_mmu_page *sp,
gfn_t gfn, u64 old_spte, u64 new_spte,
int level, bool shared)
{
@@ -494,6 +492,7 @@ static int __handle_changed_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
bool was_leaf = was_present && is_last_spte(old_spte, level);
bool is_leaf = is_present && is_last_spte(new_spte, level);
bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
+ int as_id = kvm_mmu_page_as_id(sp);
WARN_ON_ONCE(level > PT64_ROOT_MAX_LEVEL);
WARN_ON_ONCE(level < PG_LEVEL_4K);
@@ -570,19 +569,19 @@ static int __handle_changed_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
if (was_present && !was_leaf &&
(is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
- else if (is_mirror_sptep(sptep))
+ else if (is_mirror_sp(sp))
return kvm_x86_call(set_external_spte)(kvm, gfn, old_spte,
new_spte, level);
return 0;
}
-static void handle_changed_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
+static void handle_changed_spte(struct kvm *kvm, struct kvm_mmu_page *sp,
gfn_t gfn, u64 old_spte, u64 new_spte,
int level, bool shared)
{
- KVM_BUG_ON(__handle_changed_spte(kvm, as_id, sptep, gfn, old_spte,
- new_spte, level, shared), kvm);
+ KVM_BUG_ON(__handle_changed_spte(kvm, sp, gfn, old_spte, new_spte,
+ level, shared), kvm);
}
static inline int __must_check __tdp_mmu_set_spte_atomic(struct kvm *kvm,
@@ -631,6 +630,7 @@ static inline int __must_check tdp_mmu_set_spte_atomic(struct kvm *kvm,
struct tdp_iter *iter,
u64 new_spte)
{
+ struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(iter->sptep));
int ret;
lockdep_assert_held_read(&kvm->mmu_lock);
@@ -652,8 +652,8 @@ static inline int __must_check tdp_mmu_set_spte_atomic(struct kvm *kvm,
if (ret)
return ret;
- ret = __handle_changed_spte(kvm, iter->as_id, iter->sptep, iter->gfn,
- iter->old_spte, new_spte, iter->level, true);
+ ret = __handle_changed_spte(kvm, sp, iter->gfn, iter->old_spte,
+ new_spte, iter->level, true);
/*
* Unfreeze the mirror SPTE. If updating the external SPTE failed,
@@ -678,7 +678,6 @@ static inline int __must_check tdp_mmu_set_spte_atomic(struct kvm *kvm,
/*
* tdp_mmu_set_spte - Set a TDP MMU SPTE and handle the associated bookkeeping
* @kvm: KVM instance
- * @as_id: Address space ID, i.e. regular vs. SMM
* @sptep: Pointer to the SPTE
* @old_spte: The current value of the SPTE
* @new_spte: The new value that will be set for the SPTE
@@ -691,6 +690,8 @@ static inline int __must_check tdp_mmu_set_spte_atomic(struct kvm *kvm,
static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
u64 old_spte, u64 new_spte, gfn_t gfn, int level)
{
+ struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(sptep));
+
lockdep_assert_held_write(&kvm->mmu_lock);
/*
@@ -704,7 +705,7 @@ static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
- handle_changed_spte(kvm, as_id, sptep, gfn, old_spte, new_spte, level, false);
+ handle_changed_spte(kvm, sp, gfn, old_spte, new_spte, level, false);
return old_spte;
}
base-commit: f9d48449fbf9aff6cdced4703cdfdfc1d2e49efe
--
> Passing in "bool is_mirror_sp" to __handle_changed_spte() instead?
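For readers following the freeze/propagate/unfreeze discussion above, the
sequence can be sketched as simplified, stand-alone C. This is a toy model,
not the actual KVM implementation: the FROZEN_SPTE value, the -16 (-EBUSY)
error code, and the set_external() callback are hypothetical stand-ins.

```c
#include <stdint.h>
#include <stdatomic.h>

#define FROZEN_SPTE ((uint64_t)1 << 59)  /* hypothetical marker value */

/*
 * Simplified model of the TDP MMU's atomic mirror-SPTE update:
 *  1) cmpxchg the SPTE to FROZEN_SPTE so concurrent walkers back off,
 *  2) propagate the change to the "external" (S-EPT) page table,
 *  3) unfreeze: write new_spte on success, restore old_spte on failure,
 *     so the SPTE is never left frozen in perpetuity.
 */
static int set_mirror_spte_atomic(_Atomic uint64_t *sptep, uint64_t old_spte,
                                  uint64_t new_spte,
                                  int (*set_external)(uint64_t, uint64_t))
{
    uint64_t expected = old_spte;
    int ret;

    /* Step 1: freeze; bail with -EBUSY (-16 here) if another CPU raced. */
    if (!atomic_compare_exchange_strong(sptep, &expected, FROZEN_SPTE))
        return -16;

    /* Step 2: update the external page table while the SPTE is frozen. */
    ret = set_external(old_spte, new_spte);

    /* Step 3: unfreeze with the appropriate value. */
    atomic_store(sptep, ret ? old_spte : new_spte);
    return ret;
}
```

The point of contention in the thread maps onto step 2: while the SPTE is
frozen, the propagation callback must work from the old_spte/new_spte values
it was handed, not re-read the (frozen) SPTE from memory.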