From: Yan Zhao <yan.y.zhao@intel.com>
To: pbonzini@redhat.com, seanjc@google.com
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
x86@kernel.org, rick.p.edgecombe@intel.com,
dave.hansen@intel.com, kas@kernel.org, tabba@google.com,
ackerleytng@google.com, quic_eberman@quicinc.com,
michael.roth@amd.com, david@redhat.com, vannapurve@google.com,
vbabka@suse.cz, thomas.lendacky@amd.com, pgonda@google.com,
zhiquan1.li@intel.com, fan.du@intel.com, jun.miao@intel.com,
ira.weiny@intel.com, isaku.yamahata@intel.com,
xiaoyao.li@intel.com, binbin.wu@linux.intel.com,
chao.p.peng@intel.com, yan.y.zhao@intel.com
Subject: [RFC PATCH v2 23/23] KVM: TDX: Turn on PG_LEVEL_2M after TD is RUNNABLE
Date: Thu, 7 Aug 2025 17:46:28 +0800
Message-ID: <20250807094628.4790-1-yan.y.zhao@intel.com>
In-Reply-To: <20250807093950.4395-1-yan.y.zhao@intel.com>
Turn on PG_LEVEL_2M in tdx_gmem_private_max_mapping_level() once the TD is
RUNNABLE, allowing private memory to be mapped with 2MB pages.

Update the warnings and KVM_BUG_ON() checks elsewhere to reflect that 2MB
mappings are permitted after the TD is RUNNABLE.

Opportunistically remove the unused params "gfn" and "pfn" from
tdx_mem_page_record_premap_cnt().
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
RFC v2:
- Merged RFC v1's patch 4 (forcing PG_LEVEL_4K before TD runnable) with
patch 9 (allowing PG_LEVEL_2M after TD runnable).
---
arch/x86/kvm/vmx/tdx.c | 29 +++++++++++++++--------------
1 file changed, 15 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 6e061d659639..a3e1ac044ee9 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1633,12 +1633,11 @@ static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
* The counter has to be zero on KVM_TDX_FINALIZE_VM, to ensure that there
* are no half-initialized shared EPT pages.
*/
-static int tdx_mem_page_record_premap_cnt(struct kvm *kvm, gfn_t gfn,
- enum pg_level level, kvm_pfn_t pfn)
+static int tdx_mem_page_record_premap_cnt(struct kvm *kvm, enum pg_level level)
{
struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
- if (KVM_BUG_ON(kvm->arch.pre_fault_allowed, kvm))
+ if (KVM_BUG_ON(kvm->arch.pre_fault_allowed || level != PG_LEVEL_4K, kvm))
return -EINVAL;
/* nr_premapped will be decreased when tdh_mem_page_add() is called. */
@@ -1667,10 +1666,6 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
if (ret)
return ret;
- /* TODO: handle large pages. */
- if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
- return -EINVAL;
-
/*
* Read 'pre_fault_allowed' before 'kvm_tdx->state'; see matching
* barrier in tdx_td_finalize().
@@ -1680,7 +1675,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
if (likely(kvm_tdx->state == TD_STATE_RUNNABLE))
ret = tdx_mem_page_aug(kvm, gfn, level, page);
else
- ret = tdx_mem_page_record_premap_cnt(kvm, gfn, level, pfn);
+ ret = tdx_mem_page_record_premap_cnt(kvm, level);
if (ret)
tdx_pamt_put(page, level);
@@ -1697,8 +1692,8 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
gpa_t gpa = gfn_to_gpa(gfn);
u64 err, entry, level_state;
- /* TODO: handle large pages. */
- if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
+ /* Large pages are not supported before the TD is runnable. */
+ if (KVM_BUG_ON(kvm_tdx->state != TD_STATE_RUNNABLE && level != PG_LEVEL_4K, kvm))
return -EINVAL;
if (KVM_BUG_ON(!is_hkid_assigned(kvm_tdx), kvm))
@@ -1791,7 +1786,7 @@ static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
static int tdx_is_sept_zap_err_due_to_premap(struct kvm_tdx *kvm_tdx, u64 err,
u64 entry, int level)
{
- if (!err || kvm_tdx->state == TD_STATE_RUNNABLE)
+ if (!err || kvm_tdx->state == TD_STATE_RUNNABLE || level > PG_LEVEL_4K)
return false;
if (err != (TDX_EPT_ENTRY_STATE_INCORRECT | TDX_OPERAND_ID_RCX))
@@ -1811,8 +1806,8 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
u64 err, entry, level_state;
- /* For now large page isn't supported yet. */
- WARN_ON_ONCE(level != PG_LEVEL_4K);
+ /* Large pages are not supported before the TD is runnable. */
+ WARN_ON_ONCE(kvm_tdx->state != TD_STATE_RUNNABLE && level != PG_LEVEL_4K);
err = tdh_mem_range_block(&kvm_tdx->td, gpa, tdx_level, &entry, &level_state);
@@ -1993,6 +1988,9 @@ static int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
struct folio *folio = page_folio(page);
int ret;
+ WARN_ON_ONCE(folio_page_idx(folio, page) + KVM_PAGES_PER_HPAGE(level) >
+ folio_nr_pages(folio));
+
if (!is_hkid_assigned(to_kvm_tdx(kvm))) {
KVM_BUG_ON(!kvm->vm_dead, kvm);
@@ -3470,7 +3468,10 @@ int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
int tdx_gmem_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
{
- return PG_LEVEL_4K;
+ if (unlikely(to_kvm_tdx(kvm)->state != TD_STATE_RUNNABLE))
+ return PG_LEVEL_4K;
+
+ return PG_LEVEL_2M;
}
static int tdx_online_cpu(unsigned int cpu)
--
2.43.2