From: Vishal Annapurve <vannapurve@google.com>
To: Yan Zhao <yan.y.zhao@intel.com>
Cc: pbonzini@redhat.com, seanjc@google.com,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
x86@kernel.org, rick.p.edgecombe@intel.com,
dave.hansen@intel.com, kirill.shutemov@intel.com,
tabba@google.com, ackerleytng@google.com,
quic_eberman@quicinc.com, michael.roth@amd.com,
david@redhat.com, vbabka@suse.cz, jroedel@suse.de,
thomas.lendacky@amd.com, pgonda@google.com,
zhiquan1.li@intel.com, fan.du@intel.com, jun.miao@intel.com,
ira.weiny@intel.com, isaku.yamahata@intel.com,
xiaoyao.li@intel.com, binbin.wu@linux.intel.com,
chao.p.peng@intel.com
Subject: Re: [RFC PATCH 08/21] KVM: TDX: Increase/decrease folio ref for huge pages
Date: Wed, 7 May 2025 07:56:08 -0700
Message-ID: <CAGtprH-+Bo4hFxL+THiMgF5V4imdVVb0OmRhx2Uc0eom9=3JPA@mail.gmail.com>
In-Reply-To: <aBsNsZsWuVl4uo0j@yzhao56-desk.sh.intel.com>

On Wed, May 7, 2025 at 12:39 AM Yan Zhao <yan.y.zhao@intel.com> wrote:
>
> On Tue, May 06, 2025 at 06:18:55AM -0700, Vishal Annapurve wrote:
> > On Mon, May 5, 2025 at 11:07 PM Yan Zhao <yan.y.zhao@intel.com> wrote:
> > >
> > > On Mon, May 05, 2025 at 10:08:24PM -0700, Vishal Annapurve wrote:
> > > > On Mon, May 5, 2025 at 5:56 PM Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > > >
> > > > > Sorry for the late reply; I was on leave last week.
> > > > >
> > > > > On Tue, Apr 29, 2025 at 06:46:59AM -0700, Vishal Annapurve wrote:
> > > > > > On Mon, Apr 28, 2025 at 5:52 PM Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > > > > > So, we plan to remove folio_ref_add()/folio_put_refs() in the future,
> > > > > > > only invoking folio_ref_add() in the event of a removal failure.
> > > > > >
> > > > > > In my opinion, the above scheme can be deployed with this series
> > > > > > itself. guest_memfd will not take away memory from TDX VMs without an
> > > > > I initially intended to add a separate patch at the end of this series to
> > > > > implement invoking folio_ref_add() only upon a removal failure. However, I
> > > > > decided against it, since it isn't a prerequisite for guest_memfd
> > > > > supporting in-place conversion.
> > > > >
> > > > > We can include it in the next version if you think it's better.
> > > >
> > > > Ackerley is planning to send out a series for 1G HugeTLB support with
> > > > guest_memfd soon, hopefully this week. Besides, I don't see any reason
> > > > to hold extra refcounts in the TDX stack, so it would be good to clean
> > > > up this logic.
> > > >
> > > > >
> > > > > > invalidation. folio_ref_add() will not work for memory not backed by
> > > > > > page structs, but that problem can possibly be solved in the future by
> > > > > With current TDX code, all memory must be backed by a page struct.
> > > > > Both tdh_mem_page_add() and tdh_mem_page_aug() require a "struct page *" rather
> > > > > than a pfn.
> > > > >
> > > > > > notifying guest_memfd of certain ranges being in use even after
> > > > > > invalidation completes.
> > > > > A curious question:
> > > > > To support memory not backed by page structs in the future, is there any
> > > > > counterpart to the page struct to hold the ref count and map count?
> > > > >
> > > >
> > > > I imagine the needed support will follow semantics similar to VM_PFNMAP
> > > > [1] memory: no need to maintain refcounts/map counts for such physical
> > > > memory ranges, as all users will be notified when mappings are
> > > > changed/removed.
> > > So, it's possible to map such memory in both shared and private EPT
> > > simultaneously?
> >
> > No, guest_memfd will still ensure that userspace can only fault in
> > shared memory regions in order to support CoCo VM use cases.
> Before guest_memfd converts a PFN from shared to private, how does it ensure
> there are no shared mappings? For example, in [1], it uses the folio reference
> count to ensure that.
>
> Or do you believe that by eliminating the struct page, there would be no
> GUP, thereby ensuring no shared mappings by requiring all mappers to unmap in
> response to a guest_memfd invalidation notification?
Yes.
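
To make that concrete, here is a rough sketch of the kind of interface
I have in mind -- purely illustrative, none of these names exist today:

  /*
   * Illustrative only: a mapper of guest_memfd-backed ranges (KVM, an
   * IOMMU driver, ...) registers a callback instead of taking
   * long-term references on the memory.
   */
  struct gmem_invalidate_notifier {
          /*
           * Called before guest_memfd invalidates [start, end); the
           * mapper must tear down its mappings of the range.
           */
          int (*invalidate)(struct gmem_invalidate_notifier *notifier,
                            pgoff_t start, pgoff_t end);
          struct list_head entry;
  };

  /* Hypothetical registration hook on the guest_memfd file. */
  int gmem_register_notifier(struct file *file,
                             struct gmem_invalidate_notifier *notifier);
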
>
> As in Documentation/core-api/pin_user_pages.rst, long-term pinning users have
> no need to register an mmu notifier. So why must users like VFIO register for
> guest_memfd invalidation notifications?
VM_PFNMAP'd memory can't be long-term pinned, so users of such memory
ranges will have to adopt mechanisms to get notified. I think it would
be easy to persuade new users of guest_memfd to follow this scheme.
Irrespective of whether VM_PFNMAP'd memory support lands, guest_memfd
hugepage support already needs the stance "guest_memfd owns all
long-term refcounts on private memory", as discussed at LPC [1].
[1] https://lpc.events/event/18/contributions/1764/attachments/1409/3182/LPC%202024_%201G%20page%20support%20for%20guest_memfd.pdf
(slide 12)
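
On the refcount side, the "ref only on failure" behavior Yan described
earlier in this thread boils down to something like the below sketch
(the helper name is a stand-in, not the actual TDX callback):

  /*
   * Sketch: no refcount is taken on the success path; pin the page
   * only if the TDX module may still hold it after a failed removal.
   */
  static void tdx_handle_removal_result(struct page *page, int tdx_err)
  {
          if (tdx_err)
                  /*
                   * Removal failed: elevate the refcount so that
                   * guest_memfd cannot hand the page back to the
                   * allocator while the TDX module may still
                   * reference it.
                   */
                  folio_ref_add(page_folio(page), 1);
  }
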
>
> Besides, how would guest_memfd handle potential unmap failures? E.g., what
> happens to prevent converting a private PFN to shared if there are errors when
> TDX unmaps a private PFN, or if a device refuses to stop DMAing to a PFN?
Users will have to signal such failures via the invalidation callback
results or other appropriate mechanisms. guest_memfd can relay the
failures up the call chain to userspace.
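
I.e., conversion would be fail-able end to end; roughly (all names
made up):

  /*
   * Sketch: abort a private->shared conversion if any registered user
   * fails to release the range, and relay the error to userspace
   * (e.g. as the return value of the conversion ioctl).
   */
  static int gmem_convert_range(struct gmem_inode *gi,
                                pgoff_t start, pgoff_t end)
  {
          int ret;

          ret = gmem_invalidate_notifiers(gi, start, end); /* hypothetical */
          if (ret)
                  return ret; /* conversion fails, range stays private */

          return gmem_mark_shared(gi, start, end);         /* hypothetical */
  }
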
>
> Currently, guest_memfd can rely on the page ref count to avoid re-assigning a
> PFN that fails to be unmapped.
>
>
> [1] https://lore.kernel.org/all/20250328153133.3504118-5-tabba@google.com/
>
>
> > >
> > >
> > > > Any guest_memfd range updates will result in invalidations/updates of
> > > > userspace, guest, IOMMU, or any other page tables referring to
> > > > guest_memfd-backed PFNs. This story will become clearer once support
> > > > for a PFN range allocator backing guest_memfd starts getting
> > > > discussed.
> > > OK. It is indeed unclear right now how such memory would be supported.
> > >
> > > As of now, we don't anticipate that TDX will allow any mapping of VM_PFNMAP
> > > memory into the private EPT until TDX Connect.
> >
> > There is a plan to use VM_PFNMAP memory for all guest_memfd
> > shared/private ranges, orthogonal to the TDX Connect use case. With TDX
> > Connect/SEV-TIO, the major difference would be that guest_memfd private
> > ranges will be mapped into IOMMU page tables.
> >
> > Irrespective of whether/when VM_PFNMAP memory support lands, there
> > have been discussions on not using page structs for private memory
> > ranges altogether [1], even with the hugetlb allocator, which will
> > simplify the seamless merge/split story for private hugepages to
> > support memory conversion. So I think the general direction we should
> > head towards is not relying on refcounts for guest_memfd private
> > ranges, and possibly not on page structs altogether.
> It's fine to use PFNs, but I wonder if there are counterparts of struct
> page to keep all the necessary info.
>
The story will become clearer once VM_PFNMAP'd memory support starts
getting discussed. In the case of guest_memfd, there is flexibility to
store metadata for physical ranges within guest_memfd itself, just like
shareability tracking.
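
For example, per-range state can live in a structure owned by
guest_memfd itself, with no dependency on struct page. A minimal
sketch using an xarray (illustrative only, not the actual
shareability implementation):

  enum gmem_pg_state { GMEM_PG_PRIVATE, GMEM_PG_SHARED };

  /* Record the state of the 4K page at @index in @states. */
  static int gmem_set_state(struct xarray *states, pgoff_t index,
                            enum gmem_pg_state state)
  {
          return xa_err(xa_store(states, index, xa_mk_value(state),
                                 GFP_KERNEL));
  }

  /* Unset entries default to private in this sketch. */
  static enum gmem_pg_state gmem_get_state(struct xarray *states,
                                           pgoff_t index)
  {
          void *entry = xa_load(states, index);

          return entry ? xa_to_value(entry) : GMEM_PG_PRIVATE;
  }
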
>
> > I think the series [2] that makes KVM work better with PFNMAP'd
> > physical memory is headed in exactly the right direction of not
> > assuming page-struct-backed memory ranges for guest_memfd as well.
> Note: currently, VM_PFNMAP is usually used together with the VM_IO flag. In
> KVM, hva_to_pfn_remapped() only applies to "vma->vm_flags & (VM_IO | VM_PFNMAP)".
>
>
> > [1] https://lore.kernel.org/all/CAGtprH8akKUF=8+RkX_QMjp35C0bU1zxGi4v1Zm5AWCw=8V8AQ@mail.gmail.com/
> > [2] https://lore.kernel.org/linux-arm-kernel/20241010182427.1434605-1-seanjc@google.com/
> >
> > > And even in that scenario, the memory is only for private MMIO, so the
> > > backend driver is the VFIO PCI driver rather than guest_memfd.
> >
> > Not necessarily. As I mentioned above, guest_memfd ranges will be backed
> > by VM_PFNMAP memory.
> >
> > >
> > >
> > > > [1] https://elixir.bootlin.com/linux/v6.14.5/source/mm/memory.c#L6543