From: Michal Hocko <mhocko@suse.com>
To: Ackerley Tng <ackerleytng@google.com>
Cc: tabba@google.com, quic_eberman@quicinc.com, roypat@amazon.co.uk,
	jgg@nvidia.com, peterx@redhat.com, david@redhat.com,
	rientjes@google.com, fvdl@google.com, jthoughton@google.com,
	seanjc@google.com, pbonzini@redhat.com, zhiquan1.li@intel.com,
	fan.du@intel.com, jun.miao@intel.com, isaku.yamahata@intel.com,
	muchun.song@linux.dev, mike.kravetz@oracle.com,
	erdemaktas@google.com, vannapurve@google.com, qperret@google.com,
	jhubbard@nvidia.com, willy@infradead.org, shuah@kernel.org,
	brauner@kernel.org, bfoster@redhat.com,
	kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org,
	richard.weiyang@gmail.com, anup@brainfault.org,
	haibo1.xu@intel.com, ajones@ventanamicro.com,
	vkuznets@redhat.com, maciej.wieczor-retman@intel.com,
	pgonda@google.com, oliver.upton@linux.dev,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-fsdevel@kvack.org, Oscar Salvador <OSalvador@suse.com>
Subject: Re: [RFC PATCH 00/39] 1G page support for guest_memfd
Date: Wed, 11 Sep 2024 08:56:29 +0200	[thread overview]
Message-ID: <ZuE_Haaz8M6ENprE@tiehlicka> (raw)
In-Reply-To: <cover.1726009989.git.ackerleytng@google.com>

Cc Oscar for awareness

On Tue 10-09-24 23:43:31, Ackerley Tng wrote:
> Hello,
> 
> This patchset is our exploration of how to support 1G pages in guest_memfd, and
> how the pages will be used in Confidential VMs.
> 
> The patchset covers:
> 
> + How to get 1G pages
> + Allowing mmap() of guest_memfd to userspace so that both private and shared
>   memory can use the same physical pages
> + Splitting and reconstructing pages to support conversions and mmap()
> + How the VM, userspace and guest_memfd interact to support conversions
> + Selftests to test all the above
>     + Selftests also demonstrate the conversion flow between VM, userspace and
>       guest_memfd.
> 
> Why 1G pages in guest_memfd?
> 
> To bring guest_memfd to parity with HugeTLBfs-backed VMs in both performance
> and memory savings.
> 
> + Performance is improved with 1G pages through more TLB hits and faster page
>   walks on TLB misses.
> + Memory savings from 1G pages come from HugeTLB Vmemmap Optimization (HVO).
> 
> Options for 1G page support:
> 
> 1. HugeTLB
> 2. Contiguous Memory Allocator (CMA)
> 3. Other suggestions are welcome!
> 
> Comparison between options:
> 
> 1. HugeTLB
>     + Refactor HugeTLB to separate allocator from the rest of HugeTLB
>     + Pro: Graceful transition for VMs backed by HugeTLB to guest_memfd
>         + Near term: Allows co-tenancy of HugeTLB-backed and guest_memfd-backed
>           VMs
>     + Pro: Can provide iterative steps toward a new future allocator
>         + Unexplored: Managing userspace-visible changes
>             + e.g. HugeTLB's free_hugepages will decrease if HugeTLB is used,
>               but not when the future allocator is used
> 2. CMA
>     + Port some HugeTLB features to be applied on CMA
>     + Pro: Clean slate
> 
> What would refactoring HugeTLB involve?
> 
> (Some refactoring was done in this RFC, more can be done.)
> 
> 1. Broadly involves separating the HugeTLB allocator from the rest of HugeTLB
>     + Brings more modularity to HugeTLB
>     + No functionality change intended
>     + Likely step towards HugeTLB's integration into core-mm
> 2. guest_memfd will use just the allocator component of HugeTLB, not including
>    the complex parts of HugeTLB like
>     + Userspace reservations (resv_map)
>     + Shared PMD mappings
>     + Special page walkers
> 
> What features would need to be ported to CMA?
> 
> + Improved allocation guarantees
>     + Per NUMA node pool of huge pages
>     + Subpools per guest_memfd
> + Memory savings
>     + Something like HugeTLB Vmemmap Optimization
> + Configuration/reporting features
>     + Configuration of number of pages available (and per NUMA node) at and
>       after host boot
>     + Reporting of memory usage/availability statistics at runtime
> 
> HugeTLB was picked as the source of 1G pages for this RFC because it allows a
> graceful transition, and retains memory savings from HVO.
> 
> To illustrate this: if a host machine uses HugeTLBfs to back VMs and a
> confidential VM were to be scheduled on that host, some HugeTLBfs pages would
> have to be given up and returned to CMA so that guest_memfd pages could be
> rebuilt from that memory. Memory would have to be reserved on the host so that
> HVO could be removed and then reapplied to the new guest_memfd memory, which
> not only slows down memory allocation but also trims the benefits of HVO.
> 
> Improving how guest_memfd uses the allocator in a future revision of this RFC:
> 
> To provide an easier transition away from HugeTLB, guest_memfd's use of HugeTLB
> should be limited to the following allocator functions (a rough C sketch follows
> the list):
> 
> + reserve(node, page_size, num_pages) => opaque handle
>     + Used when a guest_memfd inode is created to reserve memory from backend
>       allocator
> + allocate(handle, mempolicy, page_size) => folio
>     + To allocate a folio from guest_memfd's reservation
> + split(handle, folio, target_page_size) => void
>     + To take a huge folio, split it into smaller folios, and restore them to
>       the filemap
> + reconstruct(handle, first_folio, nr_pages) => void
>     + To reconstruct a huge folio out of nr_pages starting from the
>       first_folio
> + free(handle, folio) => void
>     + To return folio to guest_memfd's reservation
> + error(handle, folio) => void
>     + To handle memory errors
> + unreserve(handle) => void
>     + To return guest_memfd's reservation to allocator backend
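> 
> A minimal C sketch of how this interface could look as an ops table (the
> struct and member types below are illustrative assumptions, not code from
> this series; struct folio and struct mempolicy are the usual kernel types):
> 
>   /* Opaque per-guest_memfd reservation handle owned by the backend. */
>   struct gmem_allocator_handle;
> 
>   struct gmem_allocator_ops {
>           /* Reserve num_pages pages of page_size on node at inode creation. */
>           struct gmem_allocator_handle *(*reserve)(int node, size_t page_size,
>                                                    unsigned long num_pages);
>           /* Allocate one folio from the reservation. */
>           struct folio *(*allocate)(struct gmem_allocator_handle *h,
>                                     struct mempolicy *mpol, size_t page_size);
>           /* Split a huge folio into target_page_size folios and restore them
>            * to the filemap. */
>           void (*split)(struct gmem_allocator_handle *h, struct folio *folio,
>                         size_t target_page_size);
>           /* Rebuild a huge folio from nr_pages starting at first_folio. */
>           void (*reconstruct)(struct gmem_allocator_handle *h,
>                               struct folio *first_folio,
>                               unsigned long nr_pages);
>           /* Return a folio to the reservation. */
>           void (*free)(struct gmem_allocator_handle *h, struct folio *folio);
>           /* Handle a memory error on a folio. */
>           void (*error)(struct gmem_allocator_handle *h, struct folio *folio);
>           /* Return the entire reservation to the backend allocator. */
>           void (*unreserve)(struct gmem_allocator_handle *h);
>   };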
> 
> Userspace should only provide a page size when creating a guest_memfd and should
> not have to specify HugeTLB.
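> 
> For illustration, creation might then look roughly like this from userspace
> (KVM_CREATE_GUEST_MEMFD and struct kvm_create_guest_memfd exist today, but the
> KVM_GUEST_MEMFD_HUGE_1GB flag below is hypothetical and only stands in for
> "userspace provides a page size"):
> 
>   /* Assumes <linux/kvm.h>, <sys/ioctl.h>, and an existing VM fd (vm_fd). */
> 
>   /* Hypothetical flag: request 1G backing pages for this guest_memfd. */
>   #define KVM_GUEST_MEMFD_HUGE_1GB   (1ULL << 0)
> 
>   struct kvm_create_guest_memfd gmem = {
>           .size  = 4UL << 30,                /* 4 GiB of guest memory */
>           .flags = KVM_GUEST_MEMFD_HUGE_1GB, /* page size, not "use HugeTLB" */
>   };
>   int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);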
> 
> Overview of patches:
> 
> + Patches 01-12
>     + Many small changes to HugeTLB, mostly to separate HugeTLBfs concepts from
>       HugeTLB, and to expose HugeTLB functions.
> + Patches 13-16
>     + Letting guest_memfd use HugeTLB
>     + Creation of each guest_memfd reserves pages from HugeTLB's global hstate
>       and puts them into the guest_memfd inode's subpool
>     + Each folio allocation takes a page from the guest_memfd inode's subpool
> + Patches 17-21
>     + Selftests for new HugeTLB features in guest_memfd
> + Patches 22-24
>     + More small changes on the HugeTLB side to expose functions needed by
>       guest_memfd
> + Patch 25
>     + Uses the newly available functions from patches 22-24 to split HugeTLB
>       pages. In this patch, HugeTLB folios are always split to 4K before any
>       usage, private or shared.
> + Patches 26-28
>     + Allow mmap() in guest_memfd and faulting in shared pages
> + Patch 29
>     + Enables conversion between private/shared pages
> + Patch 30
>     + Required to zero folios after conversions to avoid leaking initialized
>       kernel memory
> + Patches 31-38
>     + Add selftests to test mapping pages to userspace, guest/host memory
>       sharing and update conversions tests
>     + Patch 33 illustrates the conversion flow between VM/userspace/guest_memfd
> + Patch 39
>     + Dynamically split and reconstruct HugeTLB pages instead of always
>       splitting before use. All earlier selftests are expected to still pass.
> 
> TODOs:
> 
> + Add logic to wait for safe_refcount [1]
> + Look into lazy splitting/reconstruction of pages
>     + Currently, when the KVM_SET_MEMORY_ATTRIBUTES ioctl is invoked, not only
>       are the mem_attr_array and faultability updated, but the pages in the
>       requested range are also split/reconstructed as necessary. We want to
>       look into delaying splitting/reconstruction until fault time.
> + Solve race between folios being faulted in and being truncated
>     + When running private_mem_conversions_test with more than 1 vCPU, a folio
>       getting truncated may get faulted in by another process, causing elevated
>       mapcounts when the folio is freed (VM_BUG_ON_FOLIO).
> + Add intermediate splits (1G should first split to 2M and not split directly to
>   4K)
> + Use guest's lock instead of hugetlb_lock
> + Use a multi-index xarray, or replace the xarray with some other data
>   structure, for the faultability flag
> + Refactor HugeTLB better, present generic allocator interface
> 
> Please let us know your thoughts on:
> 
> + HugeTLB as the choice of transitional allocator backend
> + Refactoring HugeTLB to provide generic allocator interface
> + Shared/private conversion flow (a rough userspace sketch follows this list)
>     + Requiring user to request kernel to unmap pages from userspace using
>       madvise(MADV_DONTNEED)
>     + Failing conversion on elevated mapcounts/pincounts/refcounts
> + Process of splitting/reconstructing page
> + Anything else!
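> 
> As a rough userspace sketch of the shared-to-private direction (hva_for() is a
> stand-in for however userspace maps gpa to the host virtual address of the
> mmap()ed guest_memfd range; error handling omitted; KVM_SET_MEMORY_ATTRIBUTES,
> struct kvm_memory_attributes and KVM_MEMORY_ATTRIBUTE_PRIVATE are existing
> UAPI):
> 
>   /* 1. Unmap the range from userspace before requesting the conversion. */
>   madvise(hva_for(gpa), len, MADV_DONTNEED);
> 
>   /* 2. Ask KVM to flip the range to private; this is expected to fail if
>    *    mapcounts/pincounts/refcounts on the folios are still elevated. */
>   struct kvm_memory_attributes attr = {
>           .address    = gpa,
>           .size       = len,
>           .attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
>   };
>   ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr);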
> 
> [1] https://lore.kernel.org/all/20240829-guest-memfd-lib-v2-0-b9afc1ff3656@quicinc.com/T/
> 
> Ackerley Tng (37):
>   mm: hugetlb: Simplify logic in dequeue_hugetlb_folio_vma()
>   mm: hugetlb: Refactor vma_has_reserves() to should_use_hstate_resv()
>   mm: hugetlb: Remove unnecessary check for avoid_reserve
>   mm: mempolicy: Refactor out policy_node_nodemask()
>   mm: hugetlb: Refactor alloc_buddy_hugetlb_folio_with_mpol() to
>     interpret mempolicy instead of vma
>   mm: hugetlb: Refactor dequeue_hugetlb_folio_vma() to use mpol
>   mm: hugetlb: Refactor out hugetlb_alloc_folio
>   mm: truncate: Expose preparation steps for truncate_inode_pages_final
>   mm: hugetlb: Expose hugetlb_subpool_{get,put}_pages()
>   mm: hugetlb: Add option to create new subpool without using surplus
>   mm: hugetlb: Expose hugetlb_acct_memory()
>   mm: hugetlb: Move and expose hugetlb_zero_partial_page()
>   KVM: guest_memfd: Make guest mem use guest mem inodes instead of
>     anonymous inodes
>   KVM: guest_memfd: hugetlb: initialization and cleanup
>   KVM: guest_memfd: hugetlb: allocate and truncate from hugetlb
>   KVM: guest_memfd: Add page alignment check for hugetlb guest_memfd
>   KVM: selftests: Add basic selftests for hugetlb-backed guest_memfd
>   KVM: selftests: Support various types of backing sources for private
>     memory
>   KVM: selftests: Update test for various private memory backing source
>     types
>   KVM: selftests: Add private_mem_conversions_test.sh
>   KVM: selftests: Test that guest_memfd usage is reported via hugetlb
>   mm: hugetlb: Expose vmemmap optimization functions
>   mm: hugetlb: Expose HugeTLB functions for promoting/demoting pages
>   mm: hugetlb: Add functions to add/move/remove from hugetlb lists
>   KVM: guest_memfd: Track faultability within a struct kvm_gmem_private
>   KVM: guest_memfd: Allow mmapping guest_memfd files
>   KVM: guest_memfd: Use vm_type to determine default faultability
>   KVM: Handle conversions in the SET_MEMORY_ATTRIBUTES ioctl
>   KVM: guest_memfd: Handle folio preparation for guest_memfd mmap
>   KVM: selftests: Allow vm_set_memory_attributes to be used without
>     asserting return value of 0
>   KVM: selftests: Test using guest_memfd memory from userspace
>   KVM: selftests: Test guest_memfd memory sharing between guest and host
>   KVM: selftests: Add notes in private_mem_kvm_exits_test for mmap-able
>     guest_memfd
>   KVM: selftests: Test that pinned pages block KVM from setting memory
>     attributes to PRIVATE
>   KVM: selftests: Refactor vm_mem_add to be more flexible
>   KVM: selftests: Add helper to perform madvise by memslots
>   KVM: selftests: Update private_mem_conversions_test for mmap()able
>     guest_memfd
> 
> Vishal Annapurve (2):
>   KVM: guest_memfd: Split HugeTLB pages for guest_memfd use
>   KVM: guest_memfd: Dynamically split/reconstruct HugeTLB page
> 
>  fs/hugetlbfs/inode.c                          |   35 +-
>  include/linux/hugetlb.h                       |   54 +-
>  include/linux/kvm_host.h                      |    1 +
>  include/linux/mempolicy.h                     |    2 +
>  include/linux/mm.h                            |    1 +
>  include/uapi/linux/kvm.h                      |   26 +
>  include/uapi/linux/magic.h                    |    1 +
>  mm/hugetlb.c                                  |  346 ++--
>  mm/hugetlb_vmemmap.h                          |   11 -
>  mm/mempolicy.c                                |   36 +-
>  mm/truncate.c                                 |   26 +-
>  tools/include/linux/kernel.h                  |    4 +-
>  tools/testing/selftests/kvm/Makefile          |    3 +
>  .../kvm/guest_memfd_hugetlb_reporting_test.c  |  222 +++
>  .../selftests/kvm/guest_memfd_pin_test.c      |  104 ++
>  .../selftests/kvm/guest_memfd_sharing_test.c  |  160 ++
>  .../testing/selftests/kvm/guest_memfd_test.c  |  238 ++-
>  .../testing/selftests/kvm/include/kvm_util.h  |   45 +-
>  .../testing/selftests/kvm/include/test_util.h |   18 +
>  tools/testing/selftests/kvm/lib/kvm_util.c    |  443 +++--
>  tools/testing/selftests/kvm/lib/test_util.c   |   99 ++
>  .../kvm/x86_64/private_mem_conversions_test.c |  158 +-
>  .../x86_64/private_mem_conversions_test.sh    |   91 +
>  .../kvm/x86_64/private_mem_kvm_exits_test.c   |   11 +-
>  virt/kvm/guest_memfd.c                        | 1563 ++++++++++++++++-
>  virt/kvm/kvm_main.c                           |   17 +
>  virt/kvm/kvm_mm.h                             |   16 +
>  27 files changed, 3288 insertions(+), 443 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/guest_memfd_hugetlb_reporting_test.c
>  create mode 100644 tools/testing/selftests/kvm/guest_memfd_pin_test.c
>  create mode 100644 tools/testing/selftests/kvm/guest_memfd_sharing_test.c
>  create mode 100755 tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.sh
> 
> --
> 2.46.0.598.g6f2099f65c-goog

-- 
Michal Hocko
SUSE Labs



Thread overview: 130+ messages
2024-09-10 23:43 [RFC PATCH 00/39] 1G page support for guest_memfd Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 01/39] mm: hugetlb: Simplify logic in dequeue_hugetlb_folio_vma() Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 02/39] mm: hugetlb: Refactor vma_has_reserves() to should_use_hstate_resv() Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 03/39] mm: hugetlb: Remove unnecessary check for avoid_reserve Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 04/39] mm: mempolicy: Refactor out policy_node_nodemask() Ackerley Tng
2024-09-11 16:46   ` Gregory Price
2024-09-10 23:43 ` [RFC PATCH 05/39] mm: hugetlb: Refactor alloc_buddy_hugetlb_folio_with_mpol() to interpret mempolicy instead of vma Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 06/39] mm: hugetlb: Refactor dequeue_hugetlb_folio_vma() to use mpol Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 07/39] mm: hugetlb: Refactor out hugetlb_alloc_folio Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 08/39] mm: truncate: Expose preparation steps for truncate_inode_pages_final Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 09/39] mm: hugetlb: Expose hugetlb_subpool_{get,put}_pages() Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 10/39] mm: hugetlb: Add option to create new subpool without using surplus Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 11/39] mm: hugetlb: Expose hugetlb_acct_memory() Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 12/39] mm: hugetlb: Move and expose hugetlb_zero_partial_page() Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 13/39] KVM: guest_memfd: Make guest mem use guest mem inodes instead of anonymous inodes Ackerley Tng
2025-04-02  4:01   ` Yan Zhao
2025-04-23 20:22     ` Ackerley Tng
2025-04-24  3:53       ` Yan Zhao
2024-09-10 23:43 ` [RFC PATCH 14/39] KVM: guest_memfd: hugetlb: initialization and cleanup Ackerley Tng
2024-09-20  9:17   ` Vishal Annapurve
2024-10-01 23:00     ` Ackerley Tng
2024-12-01 17:59   ` Peter Xu
2025-02-13  9:47     ` Ackerley Tng
2025-02-26 18:55       ` Ackerley Tng
2025-03-06 17:33   ` Peter Xu
2024-09-10 23:43 ` [RFC PATCH 15/39] KVM: guest_memfd: hugetlb: allocate and truncate from hugetlb Ackerley Tng
2024-09-13 22:26   ` Elliot Berman
2024-10-03 20:23     ` Ackerley Tng
2024-10-30  9:01   ` Jun Miao
2025-02-11  1:21     ` Ackerley Tng
2024-12-01 17:55   ` Peter Xu
2025-02-13  7:52     ` Ackerley Tng
2025-02-13 16:48       ` Peter Xu
2024-09-10 23:43 ` [RFC PATCH 16/39] KVM: guest_memfd: Add page alignment check for hugetlb guest_memfd Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 17/39] KVM: selftests: Add basic selftests for hugetlb-backed guest_memfd Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 18/39] KVM: selftests: Support various types of backing sources for private memory Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 19/39] KVM: selftests: Update test for various private memory backing source types Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 20/39] KVM: selftests: Add private_mem_conversions_test.sh Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 21/39] KVM: selftests: Test that guest_memfd usage is reported via hugetlb Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 22/39] mm: hugetlb: Expose vmemmap optimization functions Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 23/39] mm: hugetlb: Expose HugeTLB functions for promoting/demoting pages Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 24/39] mm: hugetlb: Add functions to add/move/remove from hugetlb lists Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 25/39] KVM: guest_memfd: Split HugeTLB pages for guest_memfd use Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 26/39] KVM: guest_memfd: Track faultability within a struct kvm_gmem_private Ackerley Tng
2024-10-10 16:06   ` Peter Xu
2024-10-11 23:32     ` Ackerley Tng
2024-10-15 21:34       ` Peter Xu
2024-10-15 23:42         ` Ackerley Tng
2024-10-16  8:45           ` David Hildenbrand
2024-10-16 20:16             ` Peter Xu
2024-10-16 22:51               ` Jason Gunthorpe
2024-10-16 23:49                 ` Peter Xu
2024-10-16 23:54                   ` Jason Gunthorpe
2024-10-17 14:58                     ` Peter Xu
2024-10-17 16:47                       ` Jason Gunthorpe
2024-10-17 17:05                         ` Peter Xu
2024-10-17 17:10                           ` Jason Gunthorpe
2024-10-17 19:11                             ` Peter Xu
2024-10-17 19:18                               ` Jason Gunthorpe
2024-10-17 19:29                                 ` David Hildenbrand
2024-10-18  7:15                                 ` Patrick Roy
2024-10-18  7:50                                   ` David Hildenbrand
2024-10-18  9:34                                     ` Patrick Roy
2024-10-17 17:11                         ` David Hildenbrand
2024-10-17 17:16                           ` Jason Gunthorpe
2024-10-17 17:55                             ` David Hildenbrand
2024-10-17 18:26                             ` Vishal Annapurve
2024-10-17 14:56                   ` David Hildenbrand
2024-10-17 15:02               ` David Hildenbrand
2024-10-16  8:50           ` David Hildenbrand
2024-10-16 10:48             ` Vishal Annapurve
2024-10-16 11:54               ` David Hildenbrand
2024-10-16 11:57                 ` Jason Gunthorpe
2025-02-25 20:37   ` Peter Xu
2025-04-23 22:07     ` Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 27/39] KVM: guest_memfd: Allow mmapping guest_memfd files Ackerley Tng
2025-01-20 22:42   ` Peter Xu
2025-04-23 20:25     ` Ackerley Tng
2025-03-04 23:24   ` Peter Xu
2025-04-02  4:07   ` Yan Zhao
2025-04-23 20:28     ` Ackerley Tng
2024-09-10 23:43 ` [RFC PATCH 28/39] KVM: guest_memfd: Use vm_type to determine default faultability Ackerley Tng
2024-09-10 23:44 ` [RFC PATCH 29/39] KVM: Handle conversions in the SET_MEMORY_ATTRIBUTES ioctl Ackerley Tng
2024-09-10 23:44 ` [RFC PATCH 30/39] KVM: guest_memfd: Handle folio preparation for guest_memfd mmap Ackerley Tng
2024-09-16 20:00   ` Elliot Berman
2024-10-03 21:32     ` Ackerley Tng
2024-10-03 23:43       ` Ackerley Tng
2024-10-08 19:30         ` Sean Christopherson
2024-10-07 15:56       ` Patrick Roy
2024-10-08 18:07         ` Ackerley Tng
2024-10-08 19:56           ` Sean Christopherson
2024-10-09  3:51             ` Manwaring, Derek
2024-10-09 13:52               ` Andrew Cooper
2024-10-10 16:21             ` Patrick Roy
2024-10-10 19:27               ` Manwaring, Derek
2024-10-17 23:16               ` Ackerley Tng
2024-10-18  7:10                 ` Patrick Roy
2024-09-10 23:44 ` [RFC PATCH 31/39] KVM: selftests: Allow vm_set_memory_attributes to be used without asserting return value of 0 Ackerley Tng
2024-09-10 23:44 ` [RFC PATCH 32/39] KVM: selftests: Test using guest_memfd memory from userspace Ackerley Tng
2024-09-10 23:44 ` [RFC PATCH 33/39] KVM: selftests: Test guest_memfd memory sharing between guest and host Ackerley Tng
2024-09-10 23:44 ` [RFC PATCH 34/39] KVM: selftests: Add notes in private_mem_kvm_exits_test for mmap-able guest_memfd Ackerley Tng
2024-09-10 23:44 ` [RFC PATCH 35/39] KVM: selftests: Test that pinned pages block KVM from setting memory attributes to PRIVATE Ackerley Tng
2024-09-10 23:44 ` [RFC PATCH 36/39] KVM: selftests: Refactor vm_mem_add to be more flexible Ackerley Tng
2024-09-10 23:44 ` [RFC PATCH 37/39] KVM: selftests: Add helper to perform madvise by memslots Ackerley Tng
2024-09-10 23:44 ` [RFC PATCH 38/39] KVM: selftests: Update private_mem_conversions_test for mmap()able guest_memfd Ackerley Tng
2024-09-10 23:44 ` [RFC PATCH 39/39] KVM: guest_memfd: Dynamically split/reconstruct HugeTLB page Ackerley Tng
2025-04-03 12:33   ` Yan Zhao
2025-04-23 22:02     ` Ackerley Tng
2025-04-24  1:09       ` Yan Zhao
2025-04-24  4:25         ` Yan Zhao
2025-04-24  5:55           ` Chenyi Qiang
2025-04-24  8:13             ` Yan Zhao
2025-04-24 14:10               ` Vishal Annapurve
2025-04-24 18:15                 ` Ackerley Tng
2025-04-25  4:02                   ` Yan Zhao
2025-04-25 22:45                     ` Ackerley Tng
2025-04-28  1:05                       ` Yan Zhao
2025-04-28 19:02                         ` Vishal Annapurve
2025-04-30 20:09                         ` Ackerley Tng
2025-05-06  1:23                           ` Yan Zhao
2025-05-06 19:22                             ` Ackerley Tng
2025-05-07  3:15                               ` Yan Zhao
2025-05-13 17:33                                 ` Ackerley Tng
2024-09-11  6:56 ` Michal Hocko [this message]
2024-09-14  1:08 ` [RFC PATCH 00/39] 1G page support for guest_memfd Du, Fan
2024-09-14 13:34   ` Vishal Annapurve
2025-01-28  9:42 ` Amit Shah
2025-02-03  8:35   ` Ackerley Tng
2025-02-06 11:07     ` Amit Shah
2025-02-07  6:25       ` Ackerley Tng
