From: Ackerley Tng via B4 Relay <devnull+ackerleytng.google.com@kernel.org>
To: Muchun Song <muchun.song@linux.dev>,
Oscar Salvador <osalvador@suse.de>,
David Hildenbrand <david@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
fvdl@google.com, jiaqiyan@google.com, joshua.hahnjy@gmail.com,
jthoughton@google.com, mhocko@kernel.org, michael.roth@amd.com,
pasha.tatashin@soleen.com, pbonzini@redhat.com,
peterx@redhat.com, pratyush@kernel.org,
rick.p.edgecombe@intel.com, rientjes@google.com,
roman.gushchin@linux.dev, seanjc@google.com,
shakeel.butt@linux.dev, shivankg@amd.com, vannapurve@google.com,
yan.y.zhao@intel.com, Dan Williams <djbw@kernel.org>,
Jason Gunthorpe <jgg@ziepe.ca>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Ackerley Tng <ackerleytng@google.com>
Subject: [PATCH v2 6/6] mm: hugetlb: Refactor out hugetlb_alloc_folio()
Date: Wed, 06 May 2026 08:54:42 -0700
Message-ID: <20260506-hugetlb-open-up-v2-6-826a0c5f28fc@google.com>
In-Reply-To: <20260506-hugetlb-open-up-v2-0-826a0c5f28fc@google.com>
From: Ackerley Tng <ackerleytng@google.com>
Refactor hugetlb_alloc_folio() out of alloc_hugetlb_folio(). The new
helper handles allocating a folio and charging it to the memory and
HugeTLB cgroups.
This refactoring decouples HugeTLB page allocation from VMAs.
Specifically, before this change:
1. Reservations (as in resv_map) are stored in the VMA.
2. The mempolicy is stored at vma->vm_policy.
3. A VMA must be used for allocation even if the pages are not meant to be
used by the host process.
Without this coupling, VMAs are no longer a requirement for
allocation. This opens up the allocation routine for use without VMAs,
which will allow guest_memfd to use HugeTLB as a more generic allocator of
huge pages, since guest_memfd memory may not have any associated VMAs by
design. In addition, direct allocations from HugeTLB could be refactored
to avoid the use of a pseudo-VMA.
This also decouples HugeTLB page allocation from HugeTLBfs, where the
subpool is stored at the fs mount. Decoupling from the subpool is likewise
a requirement for guest_memfd, where the plan is to create a subpool per
fd and store it on the inode.
No functional change intended.
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
include/linux/hugetlb.h | 3 +
mm/hugetlb.c | 179 ++++++++++++++++++++++++++----------------------
2 files changed, 100 insertions(+), 82 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 93418625d3c5f..ec205d8580885 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -705,6 +705,9 @@ bool hugetlb_bootmem_page_zones_valid(int nid, struct huge_bootmem_page *m);
int isolate_or_dissolve_huge_folio(struct folio *folio, struct list_head *list);
int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
void wait_for_freed_hugetlb_folios(void);
+struct folio *hugetlb_alloc_folio(struct hstate *h, struct hugepage_subpool *spool,
+ struct mempolicy *mpol, int nid, nodemask_t *nodemask,
+ bool charge_hugetlb_cgroup_rsvd, bool use_global_reservation);
struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
unsigned long addr, bool cow_from_owner);
struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4159b3565a9be..a1c5b94e52e0a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2821,6 +2821,88 @@ void wait_for_freed_hugetlb_folios(void)
flush_work(&free_hpage_work);
}
+struct folio *hugetlb_alloc_folio(struct hstate *h, struct hugepage_subpool *spool,
+ struct mempolicy *mpol, int nid, nodemask_t *nodemask,
+ bool charge_hugetlb_cgroup_rsvd, bool use_global_reservation)
+{
+ size_t nr_pages = pages_per_huge_page(h);
+ struct hugetlb_cgroup *h_cg = NULL;
+ gfp_t gfp = htlb_alloc_mask(h);
+ int idx = hstate_index(h);
+ struct folio *folio;
+ int ret;
+
+ if (charge_hugetlb_cgroup_rsvd &&
+ hugetlb_cgroup_charge_cgroup_rsvd(idx, nr_pages, &h_cg))
+ return ERR_PTR(-ENOSPC);
+
+ if (hugetlb_cgroup_charge_cgroup(idx, nr_pages, &h_cg)) {
+ ret = -ENOSPC;
+ goto err_uncharge_hugetlb_cgroup_rsvd;
+ }
+
+ spin_lock_irq(&hugetlb_lock);
+
+ folio = NULL;
+ if (use_global_reservation || available_huge_pages(h))
+ folio = dequeue_hugetlb_folio_with_mpol(h, mpol, nid, nodemask);
+
+ if (!folio) {
+ spin_unlock_irq(&hugetlb_lock);
+ folio = alloc_buddy_hugetlb_folio_with_mpol(h, mpol, nid, nodemask);
+ if (!folio) {
+ ret = -ENOSPC;
+ goto err_uncharge_hugetlb_cgroup;
+ }
+ spin_lock_irq(&hugetlb_lock);
+ list_add(&folio->lru, &h->hugepage_activelist);
+ folio_ref_unfreeze(folio, 1);
+ /* Fall through */
+ }
+
+ if (use_global_reservation) {
+ folio_set_hugetlb_restore_reserve(folio);
+ h->resv_huge_pages--;
+ }
+
+ hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
+
+ if (charge_hugetlb_cgroup_rsvd) {
+ hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h),
+ h_cg, folio);
+ }
+
+ spin_unlock_irq(&hugetlb_lock);
+
+ ret = mem_cgroup_charge_hugetlb(folio, gfp | __GFP_RETRY_MAYFAIL);
+ /*
+ * Unconditionally increment NR_HUGETLB here because if
+ * mem_cgroup_charge_hugetlb failed, freeing the page will
+ * decrement NR_HUGETLB.
+ */
+ lruvec_stat_mod_folio(folio, NR_HUGETLB, pages_per_huge_page(h));
+
+ if (ret == -ENOMEM) {
+ free_huge_folio(folio);
+ /*
+ * Skip uncharging hugetlb_cgroup since the charges
+ * were committed to the folio and freeing the folio
+ * would have cleared those up.
+ */
+ return ERR_PTR(ret);
+ }
+
+ return folio;
+
+ err_uncharge_hugetlb_cgroup:
+ hugetlb_cgroup_uncharge_cgroup(idx, nr_pages, h_cg);
+ err_uncharge_hugetlb_cgroup_rsvd:
+ if (charge_hugetlb_cgroup_rsvd)
+ hugetlb_cgroup_uncharge_cgroup_rsvd(idx, nr_pages, h_cg);
+
+ return ERR_PTR(ret);
+}
+
typedef enum {
/*
* For either 0/1: we checked the per-vma resv map, and one resv
@@ -2856,11 +2938,12 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
long retval, gbl_chg, gbl_reserve;
map_chg_state map_chg;
int ret, idx;
- struct hugetlb_cgroup *h_cg = NULL;
gfp_t gfp = htlb_alloc_mask(h);
struct mempolicy *mpol;
nodemask_t *nodemask;
int nid;
+ bool charge_hugetlb_cgroup_rsvd;
+ bool global_reservation_exists;
idx = hstate_index(h);
@@ -2907,89 +2990,28 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
}
/*
- * If this allocation is not consuming a per-vma reservation,
- * charge the hugetlb cgroup now.
+ * If allocation doesn't reuse a reservation in the resv_map,
+ * charge for the reservation.
*/
- if (map_chg) {
- ret = hugetlb_cgroup_charge_cgroup_rsvd(
- idx, pages_per_huge_page(h), &h_cg);
- if (ret) {
- ret = -ENOSPC;
- goto out_subpool_put;
- }
- }
+ charge_hugetlb_cgroup_rsvd = map_chg != MAP_CHG_REUSE;
- ret = hugetlb_cgroup_charge_cgroup(idx, pages_per_huge_page(h), &h_cg);
- if (ret) {
- ret = -ENOSPC;
- goto out_uncharge_cgroup_reservation;
- }
-
- spin_lock_irq(&hugetlb_lock);
+ /*
+ * gbl_chg == 0 indicates a reservation exists for this
+ * allocation, so try to use it.
+ */
+ global_reservation_exists = gbl_chg == 0;
/* Takes reference on mpol. */
nid = huge_node(vma, addr, gfp, &mpol, &nodemask);
- /*
- * gbl_chg == 0 indicates a reservation exists for the allocation - so
- * try dequeuing a page. If there are available_huge_pages(), try using
- * them!
- */
- folio = NULL;
- if (!gbl_chg || available_huge_pages(h))
- folio = dequeue_hugetlb_folio_with_mpol(h, mpol, nid, nodemask);
-
- if (!folio) {
- spin_unlock_irq(&hugetlb_lock);
- folio = alloc_buddy_hugetlb_folio_with_mpol(h, mpol, nid, nodemask);
- if (!folio) {
- mpol_cond_put(mpol);
- ret = -ENOSPC;
- goto out_uncharge_cgroup;
- }
- spin_lock_irq(&hugetlb_lock);
- list_add(&folio->lru, &h->hugepage_activelist);
- folio_ref_unfreeze(folio, 1);
- /* Fall through */
- }
+ folio = hugetlb_alloc_folio(h, spool, mpol, nid, nodemask,
+ charge_hugetlb_cgroup_rsvd,
+ global_reservation_exists);
mpol_cond_put(mpol);
- /*
- * Either dequeued or buddy-allocated folio needs to add special
- * mark to the folio when it consumes a global reservation.
- */
- if (!gbl_chg) {
- folio_set_hugetlb_restore_reserve(folio);
- h->resv_huge_pages--;
- }
-
- hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
- /* If allocation is not consuming a reservation, also store the
- * hugetlb_cgroup pointer on the page.
- */
- if (map_chg) {
- hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h),
- h_cg, folio);
- }
-
- spin_unlock_irq(&hugetlb_lock);
-
- ret = mem_cgroup_charge_hugetlb(folio, gfp | __GFP_RETRY_MAYFAIL);
- /*
- * Unconditionally increment NR_HUGETLB here. If it turns out that
- * mem_cgroup_charge_hugetlb failed, then immediately free the page and
- * decrement NR_HUGETLB.
- */
- lruvec_stat_mod_folio(folio, NR_HUGETLB, pages_per_huge_page(h));
-
- if (ret == -ENOMEM) {
- free_huge_folio(folio);
- /*
- * Skip uncharging hugetlb_cgroup since the charges
- * were committed to the folio and freeing the folio
- * would have cleared those up.
- */
+ if (IS_ERR(folio)) {
+ ret = PTR_ERR(folio);
goto out_subpool_put;
}
@@ -3022,12 +3044,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
return folio;
-out_uncharge_cgroup:
- hugetlb_cgroup_uncharge_cgroup(idx, pages_per_huge_page(h), h_cg);
-out_uncharge_cgroup_reservation:
- if (map_chg)
- hugetlb_cgroup_uncharge_cgroup_rsvd(idx, pages_per_huge_page(h),
- h_cg);
out_subpool_put:
/*
* put page to subpool iff the quota of subpool's rsv_hpages is used
@@ -3038,7 +3054,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
hugetlb_acct_memory(h, -gbl_reserve);
}
-
out_end_reservation:
if (map_chg != MAP_CHG_ENFORCED)
vma_end_reservation(h, vma, addr);
--
2.54.0.545.g6539524ca2-goog