public inbox for linux-mm@kvack.org
* [LSF/MM/BPF TOPIC][RFC PATCH 0/2] Hugetlb Fungibility for page metadata savings and network performance
@ 2026-03-18 23:41 Sourav Panda
  2026-03-18 23:41 ` [LSF/MM/BPF TOPIC][RFC PATCH 1/2] mm: add hugepage shrinker for frozen memory Sourav Panda
  2026-03-18 23:41 ` [LSF/MM/BPF TOPIC][RFC PATCH 2/2] mm/hugetlb: skip hugetlb shrinking for proactive reclaim Sourav Panda
From: Sourav Panda @ 2026-03-18 23:41 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel
  Cc: lsf-pc, songmuchun, osalvador, mike.kravetz, mathieu.desnoyers,
	willy, david, pasha.tatashin, rientjes, weixugc, gthelen,
	souravpanda, surenb

The purpose of this RFC is to supplement our discussion at LSF/MM-26.

This is sent as a proof of concept. It applies on top of v7.0-rc3.

In VM environments, the guest frequently uses 1GB HugeTLB pages to
reduce TLB misses and minimize page-table-walk overhead for network
functions. This has the added benefit of reducing redundant struct page
metadata through HugeTLB Vmemmap Optimization (HVO).
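
To make the metadata savings concrete, a back-of-the-envelope sketch
(assuming 4KB base pages and a 64-byte struct page, as on x86-64; cf.
Documentation/mm/vmemmap_dedup.rst):

  1GB huge page  = 262144 base pages
  vmemmap needed = 262144 * 64B = 16MB per 1GB huge page
  with HVO, all but one 4KB vmemmap page per huge page is freed,
  saving ~16MB (~1.6% of the pool) per 1GB huge page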

While HVO saves significant metadata memory, the memory reserved in the
HugeTLB pool itself is not available for other purposes, such as the
hosted VM workload or the page cache.

The guest must balance two competing memory requirements:
- HugeTLB pool: 1GB pages reserved for high-performance NFV
  applications, which also yield the page metadata savings.
- Buddy allocator: 4KB pages required for business logic, system
  services, and the page cache.

Current kernel limitations prevent fungibility of memory between these
two pools. As a starting point, we propose a hugetlb shrinker that
provides one-way fungibility: ~90% of guest memory is allocated as 1GB
huge pages at boot and then converted to buddy pages on demand.
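
As a rough usage sketch (the enable knob is introduced in patch 1; the
page count is illustrative for a ~256GB guest with a ~90/10 split):

  # boot the guest with most memory in the 1GB pool, HVO and the
  # shrinker enabled:
  #   default_hugepagesz=1G hugepagesz=1G hugepages=230 \
  #   hugetlb_free_vmemmap=on hugetlb_shrinker_enabled=1
  # under sustained memory pressure the pool then shrinks on its own:
  cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages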

Sourav Panda (2):
  mm: add hugepage shrinker for frozen memory
  mm/hugetlb: skip hugetlb shrinking for proactive reclaim

 include/linux/shrinker.h |   3 +
 mm/Kconfig               |   9 +++
 mm/hugetlb.c             | 131 +++++++++++++++++++++++++++++++++++++++
 mm/internal.h            |   2 +-
 mm/shrinker.c            |  12 ++--
 mm/vmscan.c              |   6 +-
 6 files changed, 155 insertions(+), 8 deletions(-)

-- 
2.53.0.959.g497ff81fa9-goog




* [LSF/MM/BPF TOPIC][RFC PATCH 1/2] mm: add hugepage shrinker for frozen memory
  2026-03-18 23:41 [LSF/MM/BPF TOPIC][RFC PATCH 0/2] Hugetlb Fungibility for page metadata savings and network performance Sourav Panda
@ 2026-03-18 23:41 ` Sourav Panda
  2026-03-18 23:41 ` [LSF/MM/BPF TOPIC][RFC PATCH 2/2] mm/hugetlb: skip hugetlb shrinking for proactive reclaim Sourav Panda
From: Sourav Panda @ 2026-03-18 23:41 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel
  Cc: lsf-pc, songmuchun, osalvador, mike.kravetz, mathieu.desnoyers,
	willy, david, pasha.tatashin, rientjes, weixugc, gthelen,
	souravpanda, surenb

Implement a shrinker for the hugetlb subsystem to provide one-way
fungibility, converting unused persistent huge pages back to the buddy
system one huge page at a time.

This is designed for virtualization use cases, where a large pool of
huge pages is reserved but kept free, acting as a "frozen" memory
reservoir. When the system experiences memory pressure, this shrinker
thaws that memory by reclaiming huge pages on demand.

Pass the hugetlb_shrinker_enabled=1 kernel command line parameter to
enable it. Note that nr_hugepages will then change without user
intervention.
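
For example, successful registration can be checked at boot
(illustrative output; the pr_info is added below):

  $ dmesg | grep hugetlbfs
  Registering hugetlbfs shrinker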

Both kswapd and direct reclaim can shrink gigantic hugepages when the
system is under memory pressure. The shrinker only engages once the
reclaim priority drops below DEF_PRIORITY - 6, i.e. after several
unsuccessful reclaim passes, so light pressure never breaks up gigantic
pages. To safely support concurrent reclaimers (e.g., kswapd and
multiple direct reclaim tasks), a new mutex `hugepage_shrink_mutex`
is introduced.

Signed-off-by: Sourav Panda <souravpanda@google.com>
---
 include/linux/shrinker.h |   2 +
 mm/Kconfig               |   9 +++
 mm/hugetlb.c             | 125 +++++++++++++++++++++++++++++++++++++++
 mm/shrinker.c            |   2 +
 4 files changed, 138 insertions(+)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 1a00be90d93a..5374c251ee9e 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -51,6 +51,8 @@ struct shrink_control {
 	 */
 	unsigned long nr_scanned;
 
+	s8 priority;
+
 	/* current memcg being shrunk (for memcg aware shrinkers) */
 	struct mem_cgroup *memcg;
 };
diff --git a/mm/Kconfig b/mm/Kconfig
index ebd8ea353687..a88f370c7485 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -769,6 +769,15 @@ config NOMMU_INITIAL_TRIM_EXCESS
 config ARCH_WANT_GENERAL_HUGETLB
 	bool
 
+config HUGETLB_FROZEN_MEMORY_SHRINKER
+	bool "HugeTLB Frozen Memory Shrinker"
+	depends on HUGETLBFS
+	help
+	  Enables a shrinker for the hugetlb subsystem that allows
+	  unused huge pages to be released back to the buddy system
+	  under memory pressure, one huge page at a time. It is further
+	  gated by the hugetlb_shrinker_enabled kernel command line option.
+
 config ARCH_WANTS_THP_SWAP
 	def_bool n
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 327eaa4074d3..d4953ff1dda1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -27,6 +27,7 @@
 #include <linux/string_helpers.h>
 #include <linux/swap.h>
 #include <linux/leafops.h>
+#include <linux/shrinker.h>
 #include <linux/jhash.h>
 #include <linux/numa.h>
 #include <linux/llist.h>
@@ -4127,6 +4128,129 @@ ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
 	return err ? err : len;
 }
 
+#ifdef CONFIG_HUGETLB_FROZEN_MEMORY_SHRINKER
+
+static bool hugetlb_shrinker_enabled;
+static int __init cmdline_parse_hugetlb_shrinker_enabled(char *p)
+{
+	return kstrtobool(p, &hugetlb_shrinker_enabled);
+}
+early_param("hugetlb_shrinker_enabled", cmdline_parse_hugetlb_shrinker_enabled);
+
+static unsigned long hugepage_shrinker_count(struct shrinker *s,
+					     struct shrink_control *sc)
+{
+	struct hstate *h;
+
+	if (sc->priority >= DEF_PRIORITY - 6)
+		return 0;
+
+	if (!gigantic_page_runtime_supported())
+		return 0;
+
+	for_each_hstate(h) {
+		if (hstate_is_gigantic(h) && h->nr_huge_pages_node[sc->nid] > 0)
+			return SWAP_CLUSTER_MAX;
+	}
+	return 0;
+}
+
+static bool hugepage_shrinker_is_watermark_ok(int nid)
+{
+	int i;
+	pg_data_t *pgdat = NODE_DATA(nid);
+
+	for (i = 0; i < MAX_NR_ZONES; i++) {
+		unsigned long mark;
+		unsigned long free_pages;
+		struct zone *zone = pgdat->node_zones + i;
+
+		if (!managed_zone(zone))
+			continue;
+
+		mark = high_wmark_pages(zone);
+		free_pages = zone_page_state(zone, NR_FREE_PAGES);
+		if (__zone_watermark_ok(zone, MAX_PAGE_ORDER, mark,
+					MAX_NR_ZONES, 0, free_pages))
+			return true;
+	}
+	return false;
+}
+
+static DEFINE_MUTEX(hugepage_shrink_mutex);
+
+static unsigned long hugepage_shrinker_scan(struct shrinker *s,
+					    struct shrink_control *sc)
+{
+	int err;
+	struct hstate *h;
+	unsigned long old_nr;
+	nodemask_t nodes_allowed;
+
+	if (sc->priority >= DEF_PRIORITY - 6)
+		return SHRINK_STOP;
+
+	if (sc->nr_to_scan == 0)
+		return SHRINK_STOP;
+
+	if (!gigantic_page_runtime_supported())
+		return SHRINK_STOP;
+
+	if (hugepage_shrinker_is_watermark_ok(sc->nid))
+		return SHRINK_STOP;
+
+	mutex_lock(&hugepage_shrink_mutex);
+
+	if (hugepage_shrinker_is_watermark_ok(sc->nid))
+		goto unlock;
+
+	init_nodemask_of_node(&nodes_allowed, sc->nid);
+
+	for_each_hstate(h) {
+		if (!hstate_is_gigantic(h))
+			continue;
+
+		old_nr = h->nr_huge_pages_node[sc->nid];
+		if (!old_nr)
+			continue;
+
+		err = set_max_huge_pages(h, old_nr - 1, sc->nid, &nodes_allowed);
+		if (!err)
+			goto unlock;
+	}
+unlock:
+	mutex_unlock(&hugepage_shrink_mutex);
+	return SHRINK_STOP;
+}
+
+static struct shrinker *hugepage_shrinker;
+
+static int __init hugetlb_shrinker_init(void)
+{
+	if (!hugetlb_shrinker_enabled)
+		return 0;
+
+	hugepage_shrinker = shrinker_alloc(0, "hugetlbfs");
+	if (!hugepage_shrinker)
+		return -ENOMEM;
+
+	hugepage_shrinker->count_objects = hugepage_shrinker_count;
+	hugepage_shrinker->scan_objects = hugepage_shrinker_scan;
+	hugepage_shrinker->seeks = 0;
+	hugepage_shrinker->batch = 1;
+
+	pr_info("Registering hugetlbfs shrinker\n");
+	shrinker_register(hugepage_shrinker);
+
+	return 0;
+}
+#else
+static int __init hugetlb_shrinker_init(void)
+{
+	return 0;
+}
+#endif
+
 static int __init hugetlb_init(void)
 {
 	int i;
@@ -4183,6 +4307,7 @@ static int __init hugetlb_init(void)
 	hugetlb_sysfs_init();
 	hugetlb_cgroup_file_init();
 	hugetlb_sysctl_init();
+	hugetlb_shrinker_init();
 
 #ifdef CONFIG_SMP
 	num_fault_mutexes = roundup_pow_of_two(8 * num_possible_cpus());
diff --git a/mm/shrinker.c b/mm/shrinker.c
index 7b61fc0ee78f..8a7a05182465 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -529,6 +529,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 				.gfp_mask = gfp_mask,
 				.nid = nid,
 				.memcg = memcg,
+				.priority = priority,
 			};
 			struct shrinker *shrinker;
 			int shrinker_id = calc_shrinker_id(index, offset);
@@ -654,6 +655,7 @@ unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
 			.gfp_mask = gfp_mask,
 			.nid = nid,
 			.memcg = memcg,
+			.priority = priority,
 		};
 
 		if (!shrinker_try_get(shrinker))
-- 
2.53.0.983.g0bb29b3bc5-goog




* [LSF/MM/BPF TOPIC][RFC PATCH 2/2] mm/hugetlb: skip hugetlb shrinking for proactive reclaim
  2026-03-18 23:41 [LSF/MM/BPF TOPIC][RFC PATCH 0/2] Hugetlb Fungibility for page metadata savings and network performance Sourav Panda
  2026-03-18 23:41 ` [LSF/MM/BPF TOPIC][RFC PATCH 1/2] mm: add hugepage shrinker for frozen memory Sourav Panda
@ 2026-03-18 23:41 ` Sourav Panda
From: Sourav Panda @ 2026-03-18 23:41 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel
  Cc: lsf-pc, songmuchun, osalvador, mike.kravetz, mathieu.desnoyers,
	willy, david, pasha.tatashin, rientjes, weixugc, gthelen,
	souravpanda, surenb

Scan control indicates whether we are in proactive reclaim mode.

Pass that flag through to shrink_control and skip frozen-memory hugetlb
shrinking when it is set.
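
For example, reclaim triggered through the cgroup v2 memory.reclaim
interface runs with sc->proactive set and will now leave the
gigantic-page pool untouched (cgroup path is hypothetical):

  # proactively reclaim 1G without thawing the hugetlb pool:
  echo 1G > /sys/fs/cgroup/workload/memory.reclaim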

Signed-off-by: Sourav Panda <souravpanda@google.com>
---
 include/linux/shrinker.h |  1 +
 mm/hugetlb.c             |  6 ++++++
 mm/internal.h            |  2 +-
 mm/shrinker.c            | 10 ++++++----
 mm/vmscan.c              |  6 +++---
 5 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 5374c251ee9e..973d5fd68803 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -52,6 +52,7 @@ struct shrink_control {
 	unsigned long nr_scanned;
 
 	s8 priority;
+	bool proactive;
 
 	/* current memcg being shrunk (for memcg aware shrinkers) */
 	struct mem_cgroup *memcg;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d4953ff1dda1..a70aed7c8665 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4145,6 +4145,9 @@ static unsigned long hugepage_shrinker_count(struct shrinker *s,
 	if (sc->priority >= DEF_PRIORITY - 6)
 		return 0;
 
+	if (sc->proactive)
+		return 0;
+
 	if (!gigantic_page_runtime_supported())
 		return 0;
 
@@ -4193,6 +4196,9 @@ static unsigned long hugepage_shrinker_scan(struct shrinker *s,
 	if (sc->nr_to_scan == 0)
 		return SHRINK_STOP;
 
+	if (sc->proactive)
+		return SHRINK_STOP;
+
 	if (!gigantic_page_runtime_supported())
 		return SHRINK_STOP;
 
diff --git a/mm/internal.h b/mm/internal.h
index cb0af847d7d9..cccb68d723d4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1660,7 +1660,7 @@ void __meminit __init_page_from_nid(unsigned long pfn, int nid);
 
 /* shrinker related functions */
 unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
-			  int priority);
+			  int priority, bool proactive);
 
 int shmem_add_to_page_cache(struct folio *folio,
 			    struct address_space *mapping,
diff --git a/mm/shrinker.c b/mm/shrinker.c
index 8a7a05182465..21b8f0b9d092 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -467,7 +467,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 
 #ifdef CONFIG_MEMCG
 static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
-			struct mem_cgroup *memcg, int priority)
+			struct mem_cgroup *memcg, int priority, bool proactive)
 {
 	struct shrinker_info *info;
 	unsigned long ret, freed = 0;
@@ -530,6 +530,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 				.nid = nid,
 				.memcg = memcg,
 				.priority = priority,
+				.proactive = proactive,
 			};
 			struct shrinker *shrinker;
 			int shrinker_id = calc_shrinker_id(index, offset);
@@ -586,7 +587,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 }
 #else /* !CONFIG_MEMCG */
 static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
-			struct mem_cgroup *memcg, int priority)
+			struct mem_cgroup *memcg, int priority, bool proactive)
 {
 	return 0;
 }
@@ -613,7 +614,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
  * Returns the number of reclaimed slab objects.
  */
 unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
-			  int priority)
+			  int priority, bool proactive)
 {
 	unsigned long ret, freed = 0;
 	struct shrinker *shrinker;
@@ -626,7 +627,7 @@ unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
 	 * oom.
 	 */
 	if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
-		return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
+		return shrink_slab_memcg(gfp_mask, nid, memcg, priority, proactive);
 
 	/*
 	 * lockless algorithm of global shrink.
@@ -656,6 +657,7 @@ unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
 			.nid = nid,
 			.memcg = memcg,
 			.priority = priority,
+			.proactive = proactive,
 		};
 
 		if (!shrinker_try_get(shrinker))
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0fc9373e8251..39151d1edeff 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -432,7 +432,7 @@ static unsigned long drop_slab_node(int nid)
 
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
+		freed += shrink_slab(GFP_KERNEL, nid, memcg, 0, false);
 	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
 
 	return freed;
@@ -4925,7 +4925,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
 
 	success = try_to_shrink_lruvec(lruvec, sc);
 
-	shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);
+	shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority, sc->proactive);
 
 	if (!sc->proactive)
 		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
@@ -6020,7 +6020,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 		shrink_lruvec(lruvec, sc);
 
 		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
-			    sc->priority);
+			    sc->priority, sc->proactive);
 
 		/* Record the group's reclaim efficiency */
 		if (!sc->proactive)
-- 
2.53.0.983.g0bb29b3bc5-goog


