From: Ackerley Tng <ackerleytng@google.com>
To: kvm@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, x86@kernel.org,
linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
akpm@linux-foundation.org, amoorthy@google.com,
anthony.yznaga@oracle.com, anup@brainfault.org,
aou@eecs.berkeley.edu, bfoster@redhat.com,
binbin.wu@linux.intel.com, brauner@kernel.org,
catalin.marinas@arm.com, chao.p.peng@intel.com,
chenhuacai@kernel.org, dave.hansen@intel.com, david@redhat.com,
dmatlack@google.com, dwmw@amazon.co.uk, erdemaktas@google.com,
fan.du@intel.com, fvdl@google.com, graf@amazon.com,
haibo1.xu@intel.com, hch@infradead.org, hughd@google.com,
ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz,
james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca,
jgowans@amazon.com, jhubbard@nvidia.com, jroedel@suse.de,
jthoughton@google.com, jun.miao@intel.com, kai.huang@intel.com,
keirf@google.com, kent.overstreet@linux.dev,
kirill.shutemov@intel.com, liam.merwick@oracle.com,
maciej.wieczor-retman@intel.com, mail@maciej.szmigiero.name,
maz@kernel.org, mic@digikod.net, michael.roth@amd.com,
mpe@ellerman.id.au, muchun.song@linux.dev, nikunj@amd.com,
nsaenz@amazon.es, oliver.upton@linux.dev, palmer@dabbelt.com,
pankaj.gupta@amd.com, paul.walmsley@sifive.com,
pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
pgonda@google.com, pvorel@suse.cz, qperret@google.com,
quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
rick.p.edgecombe@intel.com, rientjes@google.com,
roypat@amazon.co.uk, rppt@kernel.org, seanjc@google.com,
shuah@kernel.org, steven.price@arm.com,
steven.sistare@oracle.com, suzuki.poulose@arm.com,
tabba@google.com, thomas.lendacky@amd.com,
usama.arif@bytedance.com, vannapurve@google.com, vbabka@suse.cz,
viro@zeniv.linux.org.uk, vkuznets@redhat.com,
wei.w.wang@intel.com, will@kernel.org, willy@infradead.org,
xiaoyao.li@intel.com, yan.y.zhao@intel.com, yilun.xu@intel.com,
yuzenghui@huawei.com, zhiquan1.li@intel.com
Subject: [RFC PATCH v2 35/51] mm: guestmem_hugetlb: Add support for splitting and merging pages
Date: Wed, 14 May 2025 16:42:14 -0700
Message-ID: <2ae41e0d80339da2b57011622ac2288fed65cd01.1747264138.git.ackerleytng@google.com>
In-Reply-To: <cover.1747264138.git.ackerleytng@google.com>
These functions allow guest_memfd to split and merge HugeTLB pages,
and to clean them up when the pages are freed.
When merging or splitting pages on conversion, guestmem_hugetlb
expects the refcount on the pages to already be 0; the caller must
ensure that.

For conversions, guest_memfd ensures that the refcounts are 0 by
checking that there are no unexpected refcounts and then freezing the
expected refcounts away. On unexpected refcounts, whether during
conversion or truncation, guest_memfd returns an error to userspace.
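
As a minimal caller-side sketch of that check (the helper and its
bookkeeping of "expected" filemap references are hypothetical;
folio_ref_count() and folio_ref_freeze() are existing APIs):

	/* Hypothetical guest_memfd helper, for illustration only. */
	static int gmem_freeze_expected_refs(struct folio *folio, int expected)
	{
		/* Refuse if anything besides the filemap holds a reference. */
		if (folio_ref_count(folio) != expected)
			return -EAGAIN;

		/* Atomically drop the expected references to 0; fails on a race. */
		if (!folio_ref_freeze(folio, expected))
			return -EAGAIN;

		return 0;	/* Refcount is now 0; safe to split or merge. */
	}
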
For truncation on closing, guest_memfd will just remove its own
refcounts (the filemap refcounts) and mark split pages with
PGTY_guestmem_hugetlb.
The presence of PGTY_guestmem_hugetlb will trigger the folio_put()
callback to handle further cleanup. This cleanup process will merge
pages (with refcount 0, since cleanup is triggered from folio_put())
before returning the pages to HugeTLB.
Because folio_put() can be called from atomic context and merging is
a lengthy process, the merge is deferred to a worker thread.
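
To illustrate how the pieces fit together, here is a rough
guest_memfd-side sketch of splitting for conversion; the function and
its use of guestmem_hugetlb_ops are hypothetical, while the ops
themselves are the ones added by this patch:

	/* Hypothetical caller in guest_memfd, for illustration only. */
	static int gmem_split_for_conversion(struct folio *folio)
	{
		const struct guestmem_allocator_operations *ops =
			&guestmem_hugetlb_ops;
		int ret;

		/* The folio's refcount must already be frozen to 0. */
		ret = ops->split_folio(folio);
		if (ret)
			return ret;	/* e.g. -ENOMEM while stashing metadata */

		/*
		 * From here each PAGE_SIZE folio can be installed in the
		 * filemap. On truncation, ops->free_folio() marks still-split
		 * pages with PGTY_guestmem_hugetlb so that the final
		 * folio_put() defers merging back into a HugeTLB folio.
		 */
		return 0;
	}
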
Change-Id: Ib04a3236f1e7250fd9af827630c334d40fb09d40
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Co-developed-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
include/linux/guestmem.h | 3 +
mm/guestmem_hugetlb.c | 349 ++++++++++++++++++++++++++++++++++++++-
2 files changed, 347 insertions(+), 5 deletions(-)
diff --git a/include/linux/guestmem.h b/include/linux/guestmem.h
index 4b2d820274d9..3ee816d1dd34 100644
--- a/include/linux/guestmem.h
+++ b/include/linux/guestmem.h
@@ -8,6 +8,9 @@ struct guestmem_allocator_operations {
void *(*inode_setup)(size_t size, u64 flags);
void (*inode_teardown)(void *private, size_t inode_size);
struct folio *(*alloc_folio)(void *private);
+ int (*split_folio)(struct folio *folio);
+ void (*merge_folio)(struct folio *folio);
+ void (*free_folio)(struct folio *folio);
/*
* Returns the number of PAGE_SIZE pages in a page that this guestmem
* allocator provides.
diff --git a/mm/guestmem_hugetlb.c b/mm/guestmem_hugetlb.c
index ec5a188ca2a7..8727598cf18e 100644
--- a/mm/guestmem_hugetlb.c
+++ b/mm/guestmem_hugetlb.c
@@ -11,15 +11,12 @@
#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/pagemap.h>
+#include <linux/xarray.h>
#include <uapi/linux/guestmem.h>
#include "guestmem_hugetlb.h"
-
-void guestmem_hugetlb_handle_folio_put(struct folio *folio)
-{
- WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
-}
+#include "hugetlb_vmemmap.h"
struct guestmem_hugetlb_private {
struct hstate *h;
@@ -34,6 +31,339 @@ static size_t guestmem_hugetlb_nr_pages_in_folio(void *priv)
return pages_per_huge_page(private->h);
}
+static DEFINE_XARRAY(guestmem_hugetlb_stash);
+
+struct guestmem_hugetlb_metadata {
+ void *_hugetlb_subpool;
+ void *_hugetlb_cgroup;
+ void *_hugetlb_hwpoison;
+ void *private;
+};
+
+struct guestmem_hugetlb_stash_item {
+ struct guestmem_hugetlb_metadata hugetlb_metadata;
+ /* hstate tracks the original size of this folio. */
+ struct hstate *h;
+ /* Count of split pages, individually freed, waiting to be merged. */
+ atomic_t nr_pages_waiting_to_be_merged;
+};
+
+struct workqueue_struct *guestmem_hugetlb_wq __ro_after_init;
+static struct work_struct guestmem_hugetlb_cleanup_work;
+static LLIST_HEAD(guestmem_hugetlb_cleanup_list);
+
+static inline void guestmem_hugetlb_register_folio_put_callback(struct folio *folio)
+{
+ __folio_set_guestmem_hugetlb(folio);
+}
+
+static inline void guestmem_hugetlb_unregister_folio_put_callback(struct folio *folio)
+{
+ __folio_clear_guestmem_hugetlb(folio);
+}
+
+static inline void guestmem_hugetlb_defer_cleanup(struct folio *folio)
+{
+ struct llist_node *node;
+
+ /*
+ * Reuse the folio->mapping pointer as a struct llist_node, since
+ * folio->mapping is NULL at this point.
+ */
+ BUILD_BUG_ON(sizeof(folio->mapping) != sizeof(struct llist_node));
+ node = (struct llist_node *)&folio->mapping;
+
+ /*
+	 * Only queue work if the list was previously empty. Otherwise,
+	 * queue_work() has already been called but the workfn hasn't
+	 * retrieved the list yet.
+ */
+ if (llist_add(node, &guestmem_hugetlb_cleanup_list))
+ queue_work(guestmem_hugetlb_wq, &guestmem_hugetlb_cleanup_work);
+}
+
+void guestmem_hugetlb_handle_folio_put(struct folio *folio)
+{
+ guestmem_hugetlb_unregister_folio_put_callback(folio);
+
+ /*
+	 * folio_put() can be called from interrupt context, hence defer the
+	 * work to process context.
+ */
+ guestmem_hugetlb_defer_cleanup(folio);
+}
+
+/*
+ * Stash existing hugetlb metadata. Use this function just before splitting a
+ * hugetlb page.
+ */
+static inline void
+__guestmem_hugetlb_stash_metadata(struct guestmem_hugetlb_metadata *metadata,
+ struct folio *folio)
+{
+ /*
+	 * Fields in the first tail page (folio->page + 1) need not be stashed:
+	 * they are known at split/merge time and are reinitialized anyway.
+ */
+
+ /*
+ * subpool is created for every guest_memfd inode, but the folios will
+ * outlive the inode, hence we store the subpool here.
+ */
+ metadata->_hugetlb_subpool = folio->_hugetlb_subpool;
+ /*
+ * _hugetlb_cgroup has to be stored for freeing
+ * later. _hugetlb_cgroup_rsvd does not, since it is NULL for
+ * guest_memfd folios anyway. guest_memfd reservations are handled in
+ * the inode.
+ */
+ metadata->_hugetlb_cgroup = folio->_hugetlb_cgroup;
+ metadata->_hugetlb_hwpoison = folio->_hugetlb_hwpoison;
+
+ /*
+	 * HugeTLB flags are stored in folio->private. Stash them so that
+	 * ->private can be used by core-mm.
+ */
+ metadata->private = folio->private;
+}
+
+static int guestmem_hugetlb_stash_metadata(struct folio *folio)
+{
+ XA_STATE(xas, &guestmem_hugetlb_stash, 0);
+ struct guestmem_hugetlb_stash_item *stash;
+ void *entry;
+
+	stash = kzalloc(sizeof(*stash), GFP_KERNEL);
+ if (!stash)
+ return -ENOMEM;
+
+ stash->h = folio_hstate(folio);
+ __guestmem_hugetlb_stash_metadata(&stash->hugetlb_metadata, folio);
+
+ xas_set_order(&xas, folio_pfn(folio), folio_order(folio));
+
+ xas_lock(&xas);
+ entry = xas_store(&xas, stash);
+ xas_unlock(&xas);
+
+ if (xa_is_err(entry)) {
+ kfree(stash);
+ return xa_err(entry);
+ }
+
+ return 0;
+}
+
+static inline void
+__guestmem_hugetlb_unstash_metadata(struct guestmem_hugetlb_metadata *metadata,
+ struct folio *folio)
+{
+ folio->_hugetlb_subpool = metadata->_hugetlb_subpool;
+ folio->_hugetlb_cgroup = metadata->_hugetlb_cgroup;
+ folio->_hugetlb_cgroup_rsvd = NULL;
+ folio->_hugetlb_hwpoison = metadata->_hugetlb_hwpoison;
+
+ folio_change_private(folio, metadata->private);
+}
+
+static int guestmem_hugetlb_unstash_free_metadata(struct folio *folio)
+{
+ struct guestmem_hugetlb_stash_item *stash;
+ unsigned long pfn;
+
+ pfn = folio_pfn(folio);
+
+ stash = xa_erase(&guestmem_hugetlb_stash, pfn);
+ __guestmem_hugetlb_unstash_metadata(&stash->hugetlb_metadata, folio);
+
+ kfree(stash);
+
+ return 0;
+}
+
+/**
+ * guestmem_hugetlb_split_folio() - Split a HugeTLB @folio to PAGE_SIZE pages.
+ *
+ * @folio: The folio to be split.
+ *
+ * Context: Before splitting, the folio must have a refcount of 0. After
+ * splitting, each split folio has a refcount of 0.
+ * Return: 0 on success and negative error otherwise.
+ */
+static int guestmem_hugetlb_split_folio(struct folio *folio)
+{
+ long orig_nr_pages;
+ int ret;
+ int i;
+
+ if (folio_size(folio) == PAGE_SIZE)
+ return 0;
+
+ orig_nr_pages = folio_nr_pages(folio);
+ ret = guestmem_hugetlb_stash_metadata(folio);
+ if (ret)
+ return ret;
+
+ /*
+ * hugetlb_vmemmap_restore_folio() has to be called ahead of the rest
+	 * because it checks the page type. This doesn't actually split the
+ * folio, so the first few struct pages are still intact.
+ */
+ ret = hugetlb_vmemmap_restore_folio(folio_hstate(folio), folio);
+ if (ret)
+ goto err;
+
+ /*
+ * Can clear without lock because this will not race with the folio
+	 * Can be cleared without the lock because this will not race with the
+	 * folio being mapped. The folio's page type is overlaid with mapcount,
+	 * so in other cases it's necessary to take hugetlb_lock to prevent
+	 * races with the mapcount increasing.
+ __folio_clear_hugetlb(folio);
+
+ /*
+ * Remove the first folio from h->hugepage_activelist since it is no
+ * longer a HugeTLB page. The other split pages should not be on any
+ * lists.
+ */
+ hugetlb_folio_list_del(folio);
+
+ /* Actually split page by undoing prep_compound_page() */
+ __folio_clear_head(folio);
+
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+ /*
+	 * Zero out _nr_pages, which otherwise overlaps with memcg_data and
+	 * would result in lookups on bogus memcg_data. _nr_pages doesn't have
+	 * to be set to 1 because folio_nr_pages() returns 1 whenever the head
+	 * flag is absent.
+ */
+ folio->_nr_pages = 0;
+#endif
+
+ for (i = 1; i < orig_nr_pages; ++i) {
+ struct page *p = folio_page(folio, i);
+
+ /* Copy flags from the first page to split pages. */
+ p->flags = folio->flags;
+
+ p->mapping = NULL;
+ clear_compound_head(p);
+ }
+
+ return 0;
+
+err:
+ guestmem_hugetlb_unstash_free_metadata(folio);
+
+ return ret;
+}
+
+/**
+ * guestmem_hugetlb_merge_folio() - Merge a HugeTLB folio from the folio
+ * beginning @first_folio.
+ *
+ * @first_folio: the first folio in a contiguous block of folios to be merged.
+ *
+ * The size of the contiguous block is tracked in guestmem_hugetlb_stash.
+ *
+ * Context: The first folio is checked to have a refcount of 0 before
+ * reconstruction. After reconstruction, the reconstructed folio has a
+ * refcount of 0.
+ */
+static void guestmem_hugetlb_merge_folio(struct folio *first_folio)
+{
+ struct guestmem_hugetlb_stash_item *stash;
+ struct hstate *h;
+
+ stash = xa_load(&guestmem_hugetlb_stash, folio_pfn(first_folio));
+ h = stash->h;
+
+ /*
+	 * This is the step that does the merge. prep_compound_page() will
+	 * write to pages 1 and 2 as well, so
+	 * guestmem_hugetlb_unstash_free_metadata() has to come after this.
+ */
+ prep_compound_page(&first_folio->page, huge_page_order(h));
+
+ WARN_ON(guestmem_hugetlb_unstash_free_metadata(first_folio));
+
+ /*
+ * prep_compound_page() will set up mapping on tail pages. For
+ * completeness, clear mapping on head page.
+ */
+ first_folio->mapping = NULL;
+
+ __folio_set_hugetlb(first_folio);
+
+ hugetlb_folio_list_add(first_folio, &h->hugepage_activelist);
+
+ hugetlb_vmemmap_optimize_folio(h, first_folio);
+}
+
+static struct folio *guestmem_hugetlb_maybe_merge_folio(struct folio *folio)
+{
+ struct guestmem_hugetlb_stash_item *stash;
+ unsigned long first_folio_pfn;
+ struct folio *first_folio;
+ unsigned long pfn;
+ size_t nr_pages;
+
+ pfn = folio_pfn(folio);
+
+ stash = xa_load(&guestmem_hugetlb_stash, pfn);
+ nr_pages = pages_per_huge_page(stash->h);
+ if (atomic_inc_return(&stash->nr_pages_waiting_to_be_merged) < nr_pages)
+ return NULL;
+
+ first_folio_pfn = round_down(pfn, nr_pages);
+ first_folio = pfn_folio(first_folio_pfn);
+
+ guestmem_hugetlb_merge_folio(first_folio);
+
+ return first_folio;
+}
+
+static void guestmem_hugetlb_cleanup_folio(struct folio *folio)
+{
+ struct folio *merged_folio;
+
+ merged_folio = guestmem_hugetlb_maybe_merge_folio(folio);
+ if (merged_folio)
+ __folio_put(merged_folio);
+}
+
+static void guestmem_hugetlb_cleanup_workfn(struct work_struct *work)
+{
+ struct llist_node *node;
+
+ node = llist_del_all(&guestmem_hugetlb_cleanup_list);
+ while (node) {
+ struct folio *folio;
+
+ folio = container_of((struct address_space **)node,
+ struct folio, mapping);
+
+ node = node->next;
+ folio->mapping = NULL;
+
+ guestmem_hugetlb_cleanup_folio(folio);
+ }
+}
+
+static int __init guestmem_hugetlb_init(void)
+{
+ INIT_WORK(&guestmem_hugetlb_cleanup_work, guestmem_hugetlb_cleanup_workfn);
+
+ guestmem_hugetlb_wq = alloc_workqueue("guestmem_hugetlb",
+ WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
+ if (!guestmem_hugetlb_wq)
+ return -ENOMEM;
+
+ return 0;
+}
+subsys_initcall(guestmem_hugetlb_init);
+
static void *guestmem_hugetlb_setup(size_t size, u64 flags)
{
@@ -164,10 +494,19 @@ static struct folio *guestmem_hugetlb_alloc_folio(void *priv)
return ERR_PTR(-ENOMEM);
}
+static void guestmem_hugetlb_free_folio(struct folio *folio)
+{
+ if (xa_load(&guestmem_hugetlb_stash, folio_pfn(folio)))
+ guestmem_hugetlb_register_folio_put_callback(folio);
+}
+
const struct guestmem_allocator_operations guestmem_hugetlb_ops = {
.inode_setup = guestmem_hugetlb_setup,
.inode_teardown = guestmem_hugetlb_teardown,
.alloc_folio = guestmem_hugetlb_alloc_folio,
+ .split_folio = guestmem_hugetlb_split_folio,
+ .merge_folio = guestmem_hugetlb_merge_folio,
+ .free_folio = guestmem_hugetlb_free_folio,
.nr_pages_in_folio = guestmem_hugetlb_nr_pages_in_folio,
};
EXPORT_SYMBOL_GPL(guestmem_hugetlb_ops);
--
2.49.0.1045.g170613ef41-goog