From: Yan Zhao <yan.y.zhao@intel.com>
To: pbonzini@redhat.com, seanjc@google.com
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
x86@kernel.org, rick.p.edgecombe@intel.com,
dave.hansen@intel.com, kas@kernel.org, tabba@google.com,
ackerleytng@google.com, quic_eberman@quicinc.com,
michael.roth@amd.com, david@redhat.com, vannapurve@google.com,
vbabka@suse.cz, thomas.lendacky@amd.com, pgonda@google.com,
zhiquan1.li@intel.com, fan.du@intel.com, jun.miao@intel.com,
ira.weiny@intel.com, isaku.yamahata@intel.com,
xiaoyao.li@intel.com, binbin.wu@linux.intel.com,
chao.p.peng@intel.com, yan.y.zhao@intel.com
Subject: [RFC PATCH v2 18/23] x86/virt/tdx: Do not perform cache flushes unless CLFLUSH_BEFORE_ALLOC is set
Date: Thu, 7 Aug 2025 17:45:16 +0800 [thread overview]
Message-ID: <20250807094516.4705-1-yan.y.zhao@intel.com> (raw)
In-Reply-To: <20250807093950.4395-1-yan.y.zhao@intel.com>
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
The TDX module enumerates, via a bit in TDX_FEATURES0, whether an explicit
cache flush is necessary when switching the KeyID of a page, e.g. before
handing the page over to a TD.
Currently, no TDX-capable platform has this bit set.
Moreover, cache flushing with TDH.PHYMEM.PAGE.WBINVD fails if
Dynamic PAMT is active and the target page is not 4k. The SEAMCALL only
supports 4k pages and will fail if there is no PAMT_4K for the HPA.
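The restriction above can be modeled in a few lines; the helper and status
values below are illustrative stand-ins for this note, not the module's real
interface:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative status codes standing in for SEAMCALL results. */
#define TDX_SUCCESS          0x0ULL
#define TDX_OPERAND_INVALID  0x1ULL

#define SZ_4K   4096UL

/*
 * Model of the TDH.PHYMEM.PAGE.WBINVD restriction: with Dynamic PAMT
 * active, only a 4k page (which has a PAMT_4K entry for its HPA) can
 * be flushed; a larger page would have to be flushed in 4k chunks.
 */
static uint64_t model_page_wbinvd(unsigned long page_size, int dpamt_active)
{
	if (dpamt_active && page_size != SZ_4K)
		return TDX_OPERAND_INVALID;
	return TDX_SUCCESS;
}
```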
Avoid performing these cache flushes unless the CLFLUSH_BEFORE_ALLOC bit
of TDX_FEATURES0 is set.
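The gate itself is a single feature-bit test; a minimal sketch of the
intended behavior, with a counter standing in for the actual
clflush_cache_range() call:

```c
#include <assert.h>
#include <stdint.h>

#define BIT_ULL(n)  (1ULL << (n))

/* Bit position added to arch/x86/include/asm/tdx.h by this patch. */
#define TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC  BIT_ULL(23)

static int flushes_done;	/* stand-in side effect for illustration */

/* Flush only when the TDX module reports an explicit flush is needed. */
static void maybe_clflush(uint64_t tdx_features0)
{
	if (!(tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC))
		return;		/* no flush required on this platform */
	flushes_done++;		/* would be clflush_cache_range(...) */
}
```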
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
RFC v2:
- Pulled from
git://git.kernel.org/pub/scm/linux/kernel/git/kas/linux.git tdx/dpamt-huge.
- Rebased on top of TDX huge page RFC v2 (Yan)
---
arch/x86/include/asm/tdx.h | 1 +
arch/x86/virt/vmx/tdx/tdx.c | 19 +++++++++++++------
2 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index f1bd74348b34..c058a82d4a97 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -15,6 +15,7 @@
/* Bit definitions of TDX_FEATURES0 metadata field */
#define TDX_FEATURES0_NO_RBP_MOD BIT_ULL(18)
+#define TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC BIT_ULL(23)
#define TDX_FEATURES0_DYNAMIC_PAMT BIT_ULL(36)
#ifndef __ASSEMBLER__
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 9ed585bde062..b7a0ee0f4a50 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1648,14 +1648,13 @@ static inline u64 tdx_tdvpr_pa(struct tdx_vp *td)
return page_to_phys(td->tdvpr_page);
}
-/*
- * The TDX module exposes a CLFLUSH_BEFORE_ALLOC bit to specify whether
- * a CLFLUSH of pages is required before handing them to the TDX module.
- * Be conservative and make the code simpler by doing the CLFLUSH
- * unconditionally.
- */
static void tdx_clflush_page(struct page *page)
{
+ u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
+
+ if (!(tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC))
+ return;
+
clflush_cache_range(page_to_virt(page), PAGE_SIZE);
}
@@ -2030,8 +2029,12 @@ EXPORT_SYMBOL_GPL(tdh_phymem_cache_wb);
u64 tdh_phymem_page_wbinvd_tdr(struct tdx_td *td)
{
+ u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
struct tdx_module_args args = {};
+ if (!(tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC))
+ return 0;
+
args.rcx = mk_keyed_paddr(tdx_global_keyid, td->tdr_page);
return seamcall(TDH_PHYMEM_PAGE_WBINVD, &args);
@@ -2041,10 +2044,14 @@ EXPORT_SYMBOL_GPL(tdh_phymem_page_wbinvd_tdr);
u64 tdh_phymem_page_wbinvd_hkid(u64 hkid, struct folio *folio,
unsigned long start_idx, unsigned long npages)
{
+ u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
struct page *start = folio_page(folio, start_idx);
struct tdx_module_args args = {};
u64 err;
+ if (!(tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC))
+ return 0;
+
if (start_idx + npages > folio_nr_pages(folio))
return TDX_OPERAND_INVALID;
--
2.43.2
Thread overview: 52+ messages
2025-08-07 9:39 [RFC PATCH v2 00/23] KVM: TDX huge page support for private memory Yan Zhao
2025-08-07 9:41 ` [RFC PATCH v2 01/23] x86/tdx: Enhance tdh_mem_page_aug() to support huge pages Yan Zhao
2025-08-07 9:41 ` [RFC PATCH v2 02/23] x86/virt/tdx: Add SEAMCALL wrapper tdh_mem_page_demote() Yan Zhao
2025-09-01 8:55 ` Binbin Wu
2025-09-01 9:08 ` Yan Zhao
2025-09-02 16:56 ` Edgecombe, Rick P
2025-09-02 17:37 ` Sean Christopherson
2025-09-02 17:45 ` Edgecombe, Rick P
2025-09-04 9:31 ` Yan Zhao
2025-08-07 9:42 ` [RFC PATCH v2 03/23] x86/tdx: Enhance tdh_phymem_page_wbinvd_hkid() to invalidate huge pages Yan Zhao
2025-08-07 9:42 ` [RFC PATCH v2 04/23] KVM: TDX: Introduce tdx_clear_folio() to clear " Yan Zhao
2025-09-02 2:56 ` Binbin Wu
2025-09-03 9:51 ` Yan Zhao
2025-09-03 11:19 ` Binbin Wu
2025-09-04 2:53 ` Yan Zhao
2025-08-07 9:42 ` [RFC PATCH v2 05/23] x86/tdx: Enhance tdh_phymem_page_reclaim() to support " Yan Zhao
2025-08-07 9:42 ` [RFC PATCH v2 06/23] KVM: TDX: Do not hold page refcount on private guest pages Yan Zhao
2025-08-07 9:42 ` [RFC PATCH v2 07/23] KVM: x86/mmu: Disallow page merging (huge page adjustment) for mirror root Yan Zhao
2025-08-07 9:43 ` [RFC PATCH v2 08/23] KVM: x86/tdp_mmu: Alloc external_spt page for mirror page table splitting Yan Zhao
2025-08-07 9:43 ` [RFC PATCH v2 09/23] KVM: x86/tdp_mmu: Add split_external_spt hook called during write mmu_lock Yan Zhao
2025-08-07 9:43 ` [RFC PATCH v2 10/23] KVM: TDX: Enable huge page splitting under write kvm->mmu_lock Yan Zhao
2025-08-07 9:43 ` [RFC PATCH v2 11/23] KVM: x86: Reject splitting huge pages under shared mmu_lock for mirror root Yan Zhao
2025-09-03 3:30 ` Binbin Wu
2025-08-07 9:43 ` [RFC PATCH v2 12/23] KVM: x86/mmu: Introduce kvm_split_cross_boundary_leafs() Yan Zhao
2025-09-03 6:57 ` Binbin Wu
2025-09-03 9:44 ` Yan Zhao
2025-08-07 9:44 ` [RFC PATCH v2 13/23] KVM: x86: Introduce hugepage_set_guest_inhibit() Yan Zhao
2025-08-07 9:44 ` [RFC PATCH v2 14/23] KVM: TDX: Split and inhibit huge mappings if a VMExit carries level info Yan Zhao
2025-09-03 7:36 ` Binbin Wu
2025-09-03 9:37 ` Yan Zhao
2025-08-07 9:44 ` [RFC PATCH v2 15/23] KVM: Change the return type of gfn_handler_t() from bool to int Yan Zhao
2025-08-07 9:44 ` [RFC PATCH v2 16/23] KVM: x86: Split cross-boundary mirror leafs for KVM_SET_MEMORY_ATTRIBUTES Yan Zhao
2025-08-07 9:45 ` [RFC PATCH v2 17/23] KVM: guest_memfd: Split for punch hole and private-to-shared conversion Yan Zhao
2025-09-04 7:58 ` Binbin Wu
2025-09-04 9:48 ` Yan Zhao
2025-09-04 11:07 ` Yan Zhao
2025-08-07 9:45 ` Yan Zhao [this message]
2025-08-11 21:10 ` [RFC PATCH v2 18/23] x86/virt/tdx: Do not perform cache flushes unless CLFLUSH_BEFORE_ALLOC is set Sagi Shahar
2025-08-12 6:37 ` Yan Zhao
2025-09-04 8:16 ` Binbin Wu
2025-09-04 9:50 ` Yan Zhao
2025-08-07 9:45 ` [RFC PATCH v2 19/23] KVM: TDX: Pass down pfn to split_external_spt() Yan Zhao
2025-09-04 8:30 ` Binbin Wu
2025-08-07 9:45 ` [RFC PATCH v2 20/23] KVM: TDX: Handle Dynamic PAMT in tdh_mem_page_demote() Yan Zhao
2025-08-07 9:46 ` [RFC PATCH v2 21/23] KVM: TDX: Preallocate PAMT pages to be used in split path Yan Zhao
2025-09-04 9:17 ` Binbin Wu
2025-09-04 9:58 ` Yan Zhao
2025-08-07 9:46 ` [RFC PATCH v2 22/23] KVM: TDX: Handle Dynamic PAMT on page split Yan Zhao
2025-08-14 5:31 ` Vishal Annapurve
2025-08-14 18:29 ` Vishal Annapurve
2025-08-18 4:19 ` Yan Zhao
2025-08-07 9:46 ` [RFC PATCH v2 23/23] KVM: TDX: Turn on PG_LEVEL_2M after TD is RUNNABLE Yan Zhao