From: Vishal Annapurve <vannapurve@google.com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
Cc: "seanjc@google.com" <seanjc@google.com>,
"pvorel@suse.cz" <pvorel@suse.cz>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
"Miao, Jun" <jun.miao@intel.com>,
"Shutemov, Kirill" <kirill.shutemov@intel.com>,
"pdurrant@amazon.co.uk" <pdurrant@amazon.co.uk>,
"steven.price@arm.com" <steven.price@arm.com>,
"peterx@redhat.com" <peterx@redhat.com>,
"x86@kernel.org" <x86@kernel.org>,
"amoorthy@google.com" <amoorthy@google.com>,
"tabba@google.com" <tabba@google.com>,
"quic_svaddagi@quicinc.com" <quic_svaddagi@quicinc.com>,
"maz@kernel.org" <maz@kernel.org>,
"vkuznets@redhat.com" <vkuznets@redhat.com>,
"quic_eberman@quicinc.com" <quic_eberman@quicinc.com>,
"keirf@google.com" <keirf@google.com>,
"hughd@google.com" <hughd@google.com>,
"mail@maciej.szmigiero.name" <mail@maciej.szmigiero.name>,
"palmer@dabbelt.com" <palmer@dabbelt.com>,
"Wieczor-Retman, Maciej" <maciej.wieczor-retman@intel.com>,
"Zhao, Yan Y" <yan.y.zhao@intel.com>,
"ajones@ventanamicro.com" <ajones@ventanamicro.com>,
"willy@infradead.org" <willy@infradead.org>,
"jack@suse.cz" <jack@suse.cz>,
"paul.walmsley@sifive.com" <paul.walmsley@sifive.com>,
"aik@amd.com" <aik@amd.com>,
"usama.arif@bytedance.com" <usama.arif@bytedance.com>,
"quic_mnalajal@quicinc.com" <quic_mnalajal@quicinc.com>,
"fvdl@google.com" <fvdl@google.com>,
"rppt@kernel.org" <rppt@kernel.org>,
"quic_cvanscha@quicinc.com" <quic_cvanscha@quicinc.com>,
"nsaenz@amazon.es" <nsaenz@amazon.es>,
"vbabka@suse.cz" <vbabka@suse.cz>, "Du, Fan" <fan.du@intel.com>,
"anthony.yznaga@oracle.com" <anthony.yznaga@oracle.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"thomas.lendacky@amd.com" <thomas.lendacky@amd.com>,
"mic@digikod.net" <mic@digikod.net>,
"oliver.upton@linux.dev" <oliver.upton@linux.dev>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"bfoster@redhat.com" <bfoster@redhat.com>,
"binbin.wu@linux.intel.com" <binbin.wu@linux.intel.com>,
"muchun.song@linux.dev" <muchun.song@linux.dev>,
"Li, Zhiquan1" <zhiquan1.li@intel.com>,
"rientjes@google.com" <rientjes@google.com>,
"mpe@ellerman.id.au" <mpe@ellerman.id.au>,
"Aktas, Erdem" <erdemaktas@google.com>,
"david@redhat.com" <david@redhat.com>,
"jgg@ziepe.ca" <jgg@ziepe.ca>,
"jhubbard@nvidia.com" <jhubbard@nvidia.com>,
"Xu, Haibo1" <haibo1.xu@intel.com>,
"anup@brainfault.org" <anup@brainfault.org>,
"Hansen, Dave" <dave.hansen@intel.com>,
"Yamahata, Isaku" <isaku.yamahata@intel.com>,
"jthoughton@google.com" <jthoughton@google.com>,
"Wang, Wei W" <wei.w.wang@intel.com>,
"steven.sistare@oracle.com" <steven.sistare@oracle.com>,
"jarkko@kernel.org" <jarkko@kernel.org>,
"quic_pheragu@quicinc.com" <quic_pheragu@quicinc.com>,
"chenhuacai@kernel.org" <chenhuacai@kernel.org>,
"Huang, Kai" <kai.huang@intel.com>,
"shuah@kernel.org" <shuah@kernel.org>,
"dwmw@amazon.co.uk" <dwmw@amazon.co.uk>,
"pankaj.gupta@amd.com" <pankaj.gupta@amd.com>,
"Peng, Chao P" <chao.p.peng@intel.com>,
"nikunj@amd.com" <nikunj@amd.com>,
"Graf, Alexander" <graf@amazon.com>,
"viro@zeniv.linux.org.uk" <viro@zeniv.linux.org.uk>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"yuzenghui@huawei.com" <yuzenghui@huawei.com>,
"jroedel@suse.de" <jroedel@suse.de>,
"suzuki.poulose@arm.com" <suzuki.poulose@arm.com>,
"jgowans@amazon.com" <jgowans@amazon.com>,
"Xu, Yilun" <yilun.xu@intel.com>,
"liam.merwick@oracle.com" <liam.merwick@oracle.com>,
"michael.roth@amd.com" <michael.roth@amd.com>,
"quic_tsoni@quicinc.com" <quic_tsoni@quicinc.com>,
"richard.weiyang@gmail.com" <richard.weiyang@gmail.com>,
"Weiny, Ira" <ira.weiny@intel.com>,
"aou@eecs.berkeley.edu" <aou@eecs.berkeley.edu>,
"Li, Xiaoyao" <xiaoyao.li@intel.com>,
"qperret@google.com" <qperret@google.com>,
"kent.overstreet@linux.dev" <kent.overstreet@linux.dev>,
"dmatlack@google.com" <dmatlack@google.com>,
"james.morse@arm.com" <james.morse@arm.com>,
"brauner@kernel.org" <brauner@kernel.org>,
"ackerleytng@google.com" <ackerleytng@google.com>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"pgonda@google.com" <pgonda@google.com>,
"quic_pderrin@quicinc.com" <quic_pderrin@quicinc.com>,
"roypat@amazon.co.uk" <roypat@amazon.co.uk>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"will@kernel.org" <will@kernel.org>,
"hch@infradead.org" <hch@infradead.org>
Subject: Re: [RFC PATCH v2 00/51] 1G page support for guest_memfd
Date: Fri, 16 May 2025 06:11:56 -0700
Message-ID: <CAGtprH8EMnmvvVir6_U+L5S3SEvrU1OzLrvkL58fXgfg59bjoA@mail.gmail.com>
In-Reply-To: <7d3b391f3a31396bd9abe641259392fd94b5e72f.camel@intel.com>
On Thu, May 15, 2025 at 7:12 PM Edgecombe, Rick P
<rick.p.edgecombe@intel.com> wrote:
>
> On Thu, 2025-05-15 at 17:57 -0700, Sean Christopherson wrote:
> > > > > Thinking from the TDX perspective, we might have bigger fish to fry than
> > > > > 1.6% memory savings (for example dynamic PAMT), and the rest of the
> > > > > benefits don't have numbers. How much are we getting for all the
> > > > > complexity, over say buddy allocated 2MB pages?
> >
> > TDX may have bigger fish to fry, but some of us have bigger fish to fry than
> > TDX :-)
>
> Fair enough. But TDX is on the "roadmap". So it helps to say what the target of
> this series is.
>
> >
> > > > This series should work for any page sizes backed by hugetlb memory.
> > > > Non-CoCo VMs, pKVM and Confidential VMs all need hugepages that are
> > > > essential for certain workloads and will emerge as guest_memfd users.
> > > > Features like KHO/memory persistence in addition also depend on
> > > > hugepage support in guest_memfd.
> > > >
> > > > This series takes strides towards making guest_memfd compatible with
> > > > usecases where 1G pages are essential and non-confidential VMs are
> > > > already exercising them.
> > > >
> > > > I think the main complexity here lies in supporting in-place
> > > > conversion which applies to any huge page size even for buddy
> > > > allocated 2MB pages or THP.
> > > >
> > > > This complexity arises because page structs work at a fixed
> > > > granularity, future roadmap towards not having page structs for guest
> > > > memory (at least private memory to begin with) should help towards
> > > > greatly reducing this complexity.
> > > >
> > > > That being said, DPAMT and huge page EPT mappings for TDX VMs remain
> > > > essential and complement this series well for better memory footprint
> > > > and overall performance of TDX VMs.
> > >
> > > Hmm, this didn't really answer my questions about the concrete benefits.
> > >
> > > I think it would help to include this kind of justification for the 1GB
> > > guestmemfd pages. "essential for certain workloads and will emerge" is a bit
> > > hard to review against...
> > >
> > > I think one of the challenges with coco is that it's almost like a sprint to
> > > reimplement virtualization. But enough things are changing at once that not
> > > all of the normal assumptions hold, so it can't copy all the same solutions.
> > > The recent example was that for TDX huge pages we found that normal
> > > promotion paths weren't actually yielding any benefit for surprising TDX
> > > specific reasons.
> > >
> > > On the TDX side we are also, at least currently, unmapping private pages
> > > while they are mapped shared, so any 1GB pages would get split to 2MB if
> > > there are any shared pages in them. I wonder how many 1GB pages there would
> > > be after all the shared pages are converted. At smaller TD sizes, it could
> > > be not much.
> >
> > You're conflating two different things. guest_memfd allocating and managing
> > 1GiB physical pages, and KVM mapping memory into the guest at 1GiB/2MiB
> > granularity. Allocating memory in 1GiB chunks is useful even if KVM can only
> > map memory into the guest using 4KiB pages.
>
> I'm aware of the 1.6% vmemmap benefits from the LPC talk. Is there more? The
> list quoted there was more about guest performance. Or maybe the clever page
> table walkers that find contiguous small mappings could benefit guest
> performance too? It's the kind of thing I'd like to see at least broadly called
> out.
The crux of this series really is hugetlb backing support for
guest_memfd and in-place conversion handling for CoCo VMs,
irrespective of page size; as I suggested earlier, 2M page sizes will
have to deal with much the same in-place conversion complexity.
Google internally uses 1G hugetlb pages to achieve high-bandwidth IO,
a lower memory footprint via HVO (HugeTLB Vmemmap Optimization), and a
lower MMU/IOMMU page table memory footprint, among other improvements.
These percentage-level savings translate into a substantial absolute
impact at the scale of large fleets of hosts, each with significant
memory capacity.
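To make the vmemmap numbers concrete, here is a back-of-envelope sketch of the struct-page overhead HVO recovers. The 64-byte struct page and 4 KiB base page are the usual x86-64 values and are assumptions here, as is the simplification that HVO leaves roughly one base page of vmemmap per huge page:

```python
# Back-of-envelope arithmetic for struct-page (vmemmap) overhead and what
# HVO (HugeTLB Vmemmap Optimization) recovers. STRUCT_PAGE/BASE_PAGE are
# typical x86-64 values, assumed here for illustration.

STRUCT_PAGE = 64          # bytes per struct page (typical x86-64)
BASE_PAGE = 4096          # bytes per base page

def vmemmap_overhead_pct(hugepage_size: int, hvo: bool) -> float:
    """Percent of a huge page's size consumed by its struct pages."""
    vmemmap_bytes = (hugepage_size // BASE_PAGE) * STRUCT_PAGE
    if hvo:
        # HVO remaps the tail vmemmap pages onto the first one, so only
        # about one base page of vmemmap remains per huge page.
        vmemmap_bytes = BASE_PAGE
    return 100.0 * vmemmap_bytes / hugepage_size

print(vmemmap_overhead_pct(1 << 30, hvo=False))  # 1.5625 (% without HVO)
print(vmemmap_overhead_pct(1 << 30, hvo=True))   # ~0.0004 (% with HVO)
```

This is where the oft-quoted ~1.6% figure comes from: 64 bytes of metadata per 4 KiB page, nearly all of which HVO frees for hugetlb-backed memory.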
guest_memfd hugepage support combined with hugepage EPT mapping
support for TDX VMs significantly helps:
1) ~70% decrease in TDX VM boot-up time
2) ~65% decrease in TDX VM shutdown time
3) ~90% decrease in TDX VM PAMT memory overhead
4) Reduced TDX SEPT memory overhead
We also believe this combination should help achieve better
performance with TDX Connect in the future.
Hugetlb pages are preferred because they are statically carved out at
boot, which gives much stronger availability guarantees. Once the
pages are carved out, any VM scheduled on such a host has to work with
the same hugetlb page sizes. This series therefore uses hugetlb pages
with in-place conversion, avoiding the double-allocation problem that
otherwise imposes significant memory overheads on CoCo VMs.
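A toy model of that double-allocation problem (all names and the copy-based behavior modeled here are illustrative, not this series' API): with copy-based conversion, shared ranges get a second backing while 1 GiB private hugetlb pages cannot be partially released, so both allocations coexist; in-place conversion keeps a single backing.

```python
# Toy model of the double-allocation problem for CoCo VMs. With copy-based
# conversion, shared ranges are backed by a separate pool while the original
# private (e.g. 1 GiB hugetlb) pages cannot be partially released, so both
# allocations coexist; in-place conversion flips the same backing instead.
# All names here are illustrative, not the series' actual API.

GIB = 1 << 30

def resident_bytes(guest_size: int, shared_fraction: float,
                   in_place: bool) -> int:
    shared = int(guest_size * shared_fraction)
    if in_place:
        return guest_size           # one backing, converted in place
    return guest_size + shared      # private pool plus separate shared pool

vm = 64 * GIB
print(resident_bytes(vm, 0.10, in_place=False) // GIB)  # 70 (GiB)
print(resident_bytes(vm, 0.10, in_place=True) // GIB)   # 64 (GiB)
```

Even a modest shared fraction per VM compounds across a fleet, which is why avoiding the second allocation matters at scale.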
>
> I'm thinking that Google must have a ridiculous amount of learnings about VM
> memory management. And this is probably designed around those learnings. But
> reviewers can't really evaluate it if they don't know the reasons and tradeoffs
> taken. If it's going upstream, I think it should have at least the high level
> reasoning explained.
>
> I don't mean to harp on the point so hard, but I didn't expect it to be
> controversial either.
>
> >
> > > So for TDX in isolation, it seems like jumping out too far ahead to
> > > effectively consider the value. But presumably you guys are testing this on
> > > SEV or something? Have you measured any performance improvement? For what
> > > kind of applications? Or is the idea to basically to make guestmemfd work
> > > like however Google does guest memory?
> >
> > The longer term goal of guest_memfd is to make it suitable for backing all
> > VMs, hence Vishal's "Non-CoCo VMs" comment.
>
> Oh, I actually wasn't aware of this. Or maybe I remember now. I thought he was
> talking about pKVM.
>
> > Yes, some of this is useful for TDX, but we (and others) want to use
> > guest_memfd for far more than just CoCo VMs.
>
>
> > And for non-CoCo VMs, 1GiB hugepages are mandatory for various workloads.
> I've heard this a lot. It must be true, but I've never seen the actual numbers.
> For a long time people believed 1GB huge pages on the direct map were critical,
> but then benchmarking on a contemporary CPU couldn't find much difference
> between 2MB and 1GB. I'd expect TDP huge pages to be different than that because
> the combined walks are huge, iTLB, etc, but I'd love to see a real number.