From: Binbin Wu <binbin.wu@linux.intel.com>
To: Yan Zhao <yan.y.zhao@intel.com>
Cc: pbonzini@redhat.com, seanjc@google.com,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
x86@kernel.org, rick.p.edgecombe@intel.com,
dave.hansen@intel.com, kas@kernel.org, tabba@google.com,
ackerleytng@google.com, quic_eberman@quicinc.com,
michael.roth@amd.com, david@redhat.com, vannapurve@google.com,
vbabka@suse.cz, thomas.lendacky@amd.com, pgonda@google.com,
zhiquan1.li@intel.com, fan.du@intel.com, jun.miao@intel.com,
ira.weiny@intel.com, isaku.yamahata@intel.com,
xiaoyao.li@intel.com, chao.p.peng@intel.com
Subject: Re: [RFC PATCH v2 04/23] KVM: TDX: Introduce tdx_clear_folio() to clear huge pages
Date: Tue, 2 Sep 2025 10:56:25 +0800 [thread overview]
Message-ID: <04d6d306-b495-428f-ac3a-44057fd6ccfc@linux.intel.com> (raw)
In-Reply-To: <20250807094214.4495-1-yan.y.zhao@intel.com>
On 8/7/2025 5:42 PM, Yan Zhao wrote:
> After removing or reclaiming a guest private page or a control page from a
> TD, zero the physical page using movdir64b(), enabling the kernel to reuse
> the pages.
>
> Introduce the function tdx_clear_folio() to zero out physical memory using
> movdir64b(), starting from the page at "start_idx" within a "folio" and
> spanning "npages" contiguous PFNs.
>
> Convert tdx_clear_page() to be a helper function to facilitate the
> zeroing of 4KB pages.
I think this sentence is outdated?
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
> ---
> RFC v2:
> - Add tdx_clear_folio().
> - Drop inner loop _tdx_clear_page() and move __mb() outside of the loop.
> (Rick)
> - Use C99-style definition of variables inside a for loop.
> - Note: [1] also changes tdx_clear_page(). RFC v2 is not based on [1] now.
>
> [1] https://lore.kernel.org/all/20250724130354.79392-2-adrian.hunter@intel.com
>
> RFC v1:
> - split out, let tdx_clear_page() accept level.
> ---
> arch/x86/kvm/vmx/tdx.c | 22 ++++++++++++++++------
> 1 file changed, 16 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 8eaf8431c5f1..4fabefb27135 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -277,18 +277,21 @@ static inline void tdx_disassociate_vp(struct kvm_vcpu *vcpu)
> vcpu->cpu = -1;
> }
>
> -static void tdx_clear_page(struct page *page)
> +static void tdx_clear_folio(struct folio *folio, unsigned long start_idx,
> + unsigned long npages)
> {
> const void *zero_page = (const void *) page_to_virt(ZERO_PAGE(0));
> - void *dest = page_to_virt(page);
> - unsigned long i;
>
> /*
> * The page could have been poisoned. MOVDIR64B also clears
> * the poison bit so the kernel can safely use the page again.
> */
> - for (i = 0; i < PAGE_SIZE; i += 64)
> - movdir64b(dest + i, zero_page);
> + for (unsigned long j = 0; j < npages; j++) {
> + void *dest = page_to_virt(folio_page(folio, start_idx + j));
> +
> + for (unsigned long i = 0; i < PAGE_SIZE; i += 64)
> + movdir64b(dest + i, zero_page);
> + }
> /*
> * MOVDIR64B store uses WC buffer. Prevent following memory reads
> * from seeing potentially poisoned cache.
> @@ -296,6 +299,13 @@ static void tdx_clear_page(struct page *page)
> __mb();
> }
>
> +static inline void tdx_clear_page(struct page *page)
No need to tag a local static function with "inline".
> +{
> + struct folio *folio = page_folio(page);
> +
> + tdx_clear_folio(folio, folio_page_idx(folio, page), 1);
This looked strange to me at first, but then I realized it is done to avoid
issuing an unnecessary memory barrier per 4KB page.
No better idea so far.
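To double-check my reading of why the barrier sits outside the per-page loop,
here is a user-space sketch of the tdx_clear_folio() loop structure. This is
only an illustration: movdir64b() and __mb() are kernel primitives, so they
are emulated here with memcpy() and a counter, and SKETCH_PAGE_SIZE /
sketch_clear_pages() are made-up names for the sketch.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SKETCH_PAGE_SIZE 4096
#define CHUNK 64 /* MOVDIR64B stores 64 bytes at a time */

static int fence_count; /* counts emulated __mb() calls */

/* Stand-in for movdir64b(): a plain 64-byte copy. */
static void emulated_movdir64b(void *dst, const void *src)
{
	memcpy(dst, src, CHUNK);
}

/* Stand-in for __mb(): just record that a fence was issued. */
static void emulated_mb(void)
{
	fence_count++;
}

/*
 * Mirror of the tdx_clear_folio() structure: clear "npages"
 * contiguous pages in 64-byte chunks, then issue one fence for the
 * whole range instead of one per 4KB page.
 */
static void sketch_clear_pages(uint8_t *base, unsigned long npages)
{
	static const uint8_t zero_chunk[CHUNK]; /* stands in for ZERO_PAGE(0) */

	for (unsigned long j = 0; j < npages; j++) {
		uint8_t *dest = base + j * SKETCH_PAGE_SIZE;

		for (unsigned long i = 0; i < SKETCH_PAGE_SIZE; i += CHUNK)
			emulated_movdir64b(dest + i, zero_chunk);
	}
	/* One barrier after the whole folio, as in the patch. */
	emulated_mb();
}
```

With this shape, tdx_clear_page() wrapping tdx_clear_folio(folio, idx, 1)
still pays exactly one barrier, which matches the intent above.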
> +}
> +
> static void tdx_no_vcpus_enter_start(struct kvm *kvm)
> {
> struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> @@ -1736,7 +1746,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
> pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err);
> return -EIO;
> }
> - tdx_clear_page(page);
> + tdx_clear_folio(folio, folio_page_idx(folio, page), KVM_PAGES_PER_HPAGE(level));
> tdx_pamt_put(page, level);
> tdx_unpin(kvm, page);
> return 0;