linux-mm.kvack.org archive mirror
* [RFC v2 0/4] add static huge zero folio support
@ 2025-07-24 14:49 Pankaj Raghav (Samsung)
  2025-07-24 14:49 ` [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker Pankaj Raghav (Samsung)
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Pankaj Raghav (Samsung) @ 2025-07-24 14:49 UTC (permalink / raw)
  To: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Borislav Petkov,
	Ingo Molnar, H . Peter Anvin, Vlastimil Babka, Zi Yan,
	Mike Rapoport, Dave Hansen, Michal Hocko, David Hildenbrand,
	Lorenzo Stoakes, Andrew Morton, Thomas Gleixner, Nico Pache,
	Dev Jain, Liam R . Howlett, Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, kernel, hch, Pankaj Raghav

From: Pankaj Raghav <p.raghav@samsung.com>

NOTE: I am resending as an RFC again based on Lorenzo's feedback. The
old series can be found here [1].

There are many places in the kernel where we need to zero out larger
chunks, but the maximum segment we can zero out at a time with ZERO_PAGE
is limited to PAGE_SIZE.

This concern was raised during the review of adding Large Block Size support
to XFS[2][3].

This is especially annoying in block devices and filesystems where we
attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
bvec support in the block layer, it is much more efficient to send out
larger zero pages as part of a single bvec.

Some examples of places in the kernel where this could be useful:
- blkdev_issue_zero_pages()
- iomap_dio_zero()
- vmalloc.c:zero_iter()
- rxperf_process_call()
- fscrypt_zeroout_range_inline_crypt()
- bch2_checksum_update()
...
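To put rough numbers on the bvec savings, here is a userspace sketch; the 4 KiB page and 2 MiB PMD folio sizes are illustrative assumptions, not values taken from this series:

```c
#include <assert.h>

/* Illustrative sizes: 4 KiB base page, 2 MiB PMD-sized zero folio. */
#define PAGE_SIZE_SKETCH (4096UL)
#define PMD_SIZE_SKETCH  (2UL * 1024 * 1024)

/*
 * Number of bio segments needed to zero `len` bytes when each segment
 * can carry at most `seg` bytes of the zero source.
 */
static unsigned long zero_segments(unsigned long len, unsigned long seg)
{
	return (len + seg - 1) / seg;
}
```

Zeroing 2 MiB needs 512 ZERO_PAGE bvecs but only a single huge-zero-folio bvec, which is where the multipage-bvec efficiency comes from.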

Usually huge_zero_folio is allocated on demand, and it will be
deallocated by the shrinker if there are no users of it left. At the
moment, the huge_zero_folio infrastructure's refcount is tied to the
lifetime of the process that created it. This might not work for the bio
layer, as completions can be async and the process that created the
huge_zero_folio might no longer be alive. Also, one of the main points
that came up during the discussion was to have something bigger than the
zero page as a drop-in replacement.

Add a config option STATIC_HUGE_ZERO_FOLIO that will always allocate
the huge_zero_folio and never drop the reference. This makes it possible
to use the huge_zero_folio without having to pass any mm struct and does
not tie the lifetime of the zero folio to anything, making it a drop-in
replacement for ZERO_PAGE.

I have converted blkdev_issue_zero_pages() as an example in this
series. I also noticed close to a 4% performance improvement just by
replacing ZERO_PAGE with the static huge_zero_folio.

I will send patches to individual subsystems using the huge_zero_folio
once this gets upstreamed.

Looking forward to some feedback.

[1] https://lore.kernel.org/linux-mm/20250707142319.319642-1-kernel@pankajraghav.com/
[2] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
[3] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/

Changes since v1:
- Fixed all warnings.
- Added a timed retry after repeated allocation failures.
- Added Acked-by and Signed-off-by from David.

Changes since last series[1]:
- Instead of allocating a new page through memblock, use the same
  infrastructure as huge_zero_folio but raise the reference and never
  drop it. (David)
- And some minor cleanups based on Lorenzo's feedback.

Pankaj Raghav (4):
  mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker
  mm: add static huge zero folio
  mm: add largest_zero_folio() routine
  block: use largest_zero_folio in __blkdev_issue_zero_pages()

 arch/x86/Kconfig        |  1 +
 block/blk-lib.c         | 15 +++++-----
 include/linux/huge_mm.h | 35 ++++++++++++++++++++++
 mm/Kconfig              | 21 +++++++++++++
 mm/huge_memory.c        | 66 +++++++++++++++++++++++++++++++++--------
 5 files changed, 119 insertions(+), 19 deletions(-)


base-commit: 4c831e7a4b72b7ac16b84e4317dee635b170a663
-- 
2.49.0



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker
  2025-07-24 14:49 [RFC v2 0/4] add static huge zero folio support Pankaj Raghav (Samsung)
@ 2025-07-24 14:49 ` Pankaj Raghav (Samsung)
  2025-07-25  2:52   ` Zi Yan
                     ` (2 more replies)
  2025-07-24 14:49 ` [RFC v2 2/4] mm: add static huge zero folio Pankaj Raghav (Samsung)
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 16+ messages in thread
From: Pankaj Raghav (Samsung) @ 2025-07-24 14:49 UTC (permalink / raw)
  To: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Borislav Petkov,
	Ingo Molnar, H . Peter Anvin, Vlastimil Babka, Zi Yan,
	Mike Rapoport, Dave Hansen, Michal Hocko, David Hildenbrand,
	Lorenzo Stoakes, Andrew Morton, Thomas Gleixner, Nico Pache,
	Dev Jain, Liam R . Howlett, Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, kernel, hch, Pankaj Raghav

From: Pankaj Raghav <p.raghav@samsung.com>

As we have already moved from exposing huge_zero_page to
huge_zero_folio, change the name of the shrinker to reflect that.

No functional changes.

Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
 mm/huge_memory.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2b4ea5a2ce7d..5d8365d1d3e9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -266,15 +266,15 @@ void mm_put_huge_zero_folio(struct mm_struct *mm)
 		put_huge_zero_page();
 }
 
-static unsigned long shrink_huge_zero_page_count(struct shrinker *shrink,
-					struct shrink_control *sc)
+static unsigned long shrink_huge_zero_folio_count(struct shrinker *shrink,
+						  struct shrink_control *sc)
 {
 	/* we can free zero page only if last reference remains */
 	return atomic_read(&huge_zero_refcount) == 1 ? HPAGE_PMD_NR : 0;
 }
 
-static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
-				       struct shrink_control *sc)
+static unsigned long shrink_huge_zero_folio_scan(struct shrinker *shrink,
+						 struct shrink_control *sc)
 {
 	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
 		struct folio *zero_folio = xchg(&huge_zero_folio, NULL);
@@ -287,7 +287,7 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
 	return 0;
 }
 
-static struct shrinker *huge_zero_page_shrinker;
+static struct shrinker *huge_zero_folio_shrinker;
 
 #ifdef CONFIG_SYSFS
 static ssize_t enabled_show(struct kobject *kobj,
@@ -849,8 +849,8 @@ static inline void hugepage_exit_sysfs(struct kobject *hugepage_kobj)
 
 static int __init thp_shrinker_init(void)
 {
-	huge_zero_page_shrinker = shrinker_alloc(0, "thp-zero");
-	if (!huge_zero_page_shrinker)
+	huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
+	if (!huge_zero_folio_shrinker)
 		return -ENOMEM;
 
 	deferred_split_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
@@ -858,13 +858,13 @@ static int __init thp_shrinker_init(void)
 						 SHRINKER_NONSLAB,
 						 "thp-deferred_split");
 	if (!deferred_split_shrinker) {
-		shrinker_free(huge_zero_page_shrinker);
+		shrinker_free(huge_zero_folio_shrinker);
 		return -ENOMEM;
 	}
 
-	huge_zero_page_shrinker->count_objects = shrink_huge_zero_page_count;
-	huge_zero_page_shrinker->scan_objects = shrink_huge_zero_page_scan;
-	shrinker_register(huge_zero_page_shrinker);
+	huge_zero_folio_shrinker->count_objects = shrink_huge_zero_folio_count;
+	huge_zero_folio_shrinker->scan_objects = shrink_huge_zero_folio_scan;
+	shrinker_register(huge_zero_folio_shrinker);
 
 	deferred_split_shrinker->count_objects = deferred_split_count;
 	deferred_split_shrinker->scan_objects = deferred_split_scan;
@@ -875,7 +875,7 @@ static int __init thp_shrinker_init(void)
 
 static void __init thp_shrinker_exit(void)
 {
-	shrinker_free(huge_zero_page_shrinker);
+	shrinker_free(huge_zero_folio_shrinker);
 	shrinker_free(deferred_split_shrinker);
 }
 
-- 
2.49.0




* [RFC v2 2/4] mm: add static huge zero folio
  2025-07-24 14:49 [RFC v2 0/4] add static huge zero folio support Pankaj Raghav (Samsung)
  2025-07-24 14:49 ` [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker Pankaj Raghav (Samsung)
@ 2025-07-24 14:49 ` Pankaj Raghav (Samsung)
  2025-08-01  4:23   ` Ritesh Harjani
  2025-08-01 15:49   ` David Hildenbrand
  2025-07-24 14:50 ` [RFC v2 3/4] mm: add largest_zero_folio() routine Pankaj Raghav (Samsung)
  2025-07-24 14:50 ` [RFC v2 4/4] block: use largest_zero_folio in __blkdev_issue_zero_pages() Pankaj Raghav (Samsung)
  3 siblings, 2 replies; 16+ messages in thread
From: Pankaj Raghav (Samsung) @ 2025-07-24 14:49 UTC (permalink / raw)
  To: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Borislav Petkov,
	Ingo Molnar, H . Peter Anvin, Vlastimil Babka, Zi Yan,
	Mike Rapoport, Dave Hansen, Michal Hocko, David Hildenbrand,
	Lorenzo Stoakes, Andrew Morton, Thomas Gleixner, Nico Pache,
	Dev Jain, Liam R . Howlett, Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, kernel, hch, Pankaj Raghav

From: Pankaj Raghav <p.raghav@samsung.com>

There are many places in the kernel where we need to zero out larger
chunks, but the maximum segment we can zero out at a time with ZERO_PAGE
is limited to PAGE_SIZE.

This is especially annoying in block devices and filesystems where we
attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
bvec support in the block layer, it is much more efficient to send out
larger zero pages as part of a single bvec.

This concern was raised during the review of adding LBS support to
XFS[1][2].

Usually huge_zero_folio is allocated on demand, and it will be
deallocated by the shrinker if there are no users of it left. At the
moment, the huge_zero_folio infrastructure's refcount is tied to the
lifetime of the process that created it. This might not work for the bio
layer, as completions can be async and the process that created the
huge_zero_folio might no longer be alive. Also, one of the main points
that came up during the discussion was to have something bigger than the
zero page as a drop-in replacement.

Add a config option STATIC_HUGE_ZERO_FOLIO that will always allocate
the huge_zero_folio and never drop the reference. This makes it possible
to use the huge_zero_folio without having to pass any mm struct and does
not tie the lifetime of the zero folio to anything, making it a drop-in
replacement for ZERO_PAGE.

If the STATIC_HUGE_ZERO_FOLIO config option is enabled,
mm_get_huge_zero_folio() will simply return this folio instead of
dynamically allocating a new PMD-sized folio.

This option can waste memory on small systems or on systems with a 64k
base page size. So make it opt-in, and gate it behind a per-architecture
option so that we don't enable this feature on systems with larger base
page sizes.

[1] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
[2] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/

Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
 arch/x86/Kconfig        |  1 +
 include/linux/huge_mm.h | 18 ++++++++++++++++++
 mm/Kconfig              | 21 +++++++++++++++++++++
 mm/huge_memory.c        | 42 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 82 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0ce86e14ab5e..8e2aa1887309 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -153,6 +153,7 @@ config X86
 	select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP	if X86_64
 	select ARCH_WANT_HUGETLB_VMEMMAP_PREINIT if X86_64
 	select ARCH_WANTS_THP_SWAP		if X86_64
+	select ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO if X86_64
 	select ARCH_HAS_PARANOID_L1D_FLUSH
 	select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
 	select BUILDTIME_TABLE_SORT
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7748489fde1b..78ebceb61d0e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -476,6 +476,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
 extern struct folio *huge_zero_folio;
 extern unsigned long huge_zero_pfn;
+extern atomic_t huge_zero_folio_is_static;
 
 static inline bool is_huge_zero_folio(const struct folio *folio)
 {
@@ -494,6 +495,18 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
 
 struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
 void mm_put_huge_zero_folio(struct mm_struct *mm);
+struct folio *__get_static_huge_zero_folio(void);
+
+static inline struct folio *get_static_huge_zero_folio(void)
+{
+	if (!IS_ENABLED(CONFIG_STATIC_HUGE_ZERO_FOLIO))
+		return NULL;
+
+	if (likely(atomic_read(&huge_zero_folio_is_static)))
+		return huge_zero_folio;
+
+	return __get_static_huge_zero_folio();
+}
 
 static inline bool thp_migration_supported(void)
 {
@@ -685,6 +698,11 @@ static inline int change_huge_pud(struct mmu_gather *tlb,
 {
 	return 0;
 }
+
+static inline struct folio *get_static_huge_zero_folio(void)
+{
+	return NULL;
+}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline int split_folio_to_list_to_order(struct folio *folio,
diff --git a/mm/Kconfig b/mm/Kconfig
index 0287e8d94aea..e2132fcf2ccb 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -835,6 +835,27 @@ config ARCH_WANT_GENERAL_HUGETLB
 config ARCH_WANTS_THP_SWAP
 	def_bool n
 
+config ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO
+	def_bool n
+
+config STATIC_HUGE_ZERO_FOLIO
+	bool "Allocate a PMD sized folio for zeroing"
+	depends on ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO && TRANSPARENT_HUGEPAGE
+	help
+	  Without this config enabled, the huge zero folio is allocated on
+	  demand and freed under memory pressure once no longer in use.
+	  To detect remaining users reliably, references to the huge zero folio
+	  must be tracked precisely, so it is commonly only available for mapping
+	  it into user page tables.
+
+	  With this config enabled, the huge zero folio can also be used
+	  for other purposes that do not implement precise reference counting:
+	  it is still allocated on demand, but never freed, allowing for more
+	  widespread use, for example, when performing I/O similar to the
+	  traditional shared zeropage.
+
+	  Not suitable for memory constrained systems.
+
 config MM_ID
 	def_bool n
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5d8365d1d3e9..c160c37f4d31 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -75,6 +75,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 static bool split_underused_thp = true;
 
 static atomic_t huge_zero_refcount;
+atomic_t huge_zero_folio_is_static __read_mostly;
 struct folio *huge_zero_folio __read_mostly;
 unsigned long huge_zero_pfn __read_mostly = ~0UL;
 unsigned long huge_anon_orders_always __read_mostly;
@@ -266,6 +267,47 @@ void mm_put_huge_zero_folio(struct mm_struct *mm)
 		put_huge_zero_page();
 }
 
+#ifdef CONFIG_STATIC_HUGE_ZERO_FOLIO
+#define FAIL_COUNT_LIMIT 2
+
+struct folio *__get_static_huge_zero_folio(void)
+{
+	static unsigned long fail_count_clear_timer;
+	static atomic_t huge_zero_static_fail_count __read_mostly;
+
+	if (unlikely(!slab_is_available()))
+		return NULL;
+
+	/*
+	 * If we failed to allocate a huge zero folio multiple times,
+	 * just refrain from trying for one minute before retrying to get
+	 * a reference again.
+	 */
+	if (atomic_read(&huge_zero_static_fail_count) > FAIL_COUNT_LIMIT) {
+		if (time_before(jiffies, fail_count_clear_timer))
+			return NULL;
+		atomic_set(&huge_zero_static_fail_count, 0);
+	}
+	/*
+	 * Our raised reference will prevent the shrinker from ever having
+	 * success.
+	 */
+	if (!get_huge_zero_page()) {
+		int count = atomic_inc_return(&huge_zero_static_fail_count);
+
+		if (count > FAIL_COUNT_LIMIT)
+			fail_count_clear_timer = jiffies + 60 * HZ;
+
+		return NULL;
+	}
+
+	if (atomic_cmpxchg(&huge_zero_folio_is_static, 0, 1) != 0)
+		put_huge_zero_page();
+
+	return huge_zero_folio;
+}
+#endif /* CONFIG_STATIC_HUGE_ZERO_FOLIO */
+
 static unsigned long shrink_huge_zero_folio_count(struct shrinker *shrink,
 						  struct shrink_control *sc)
 {
-- 
2.49.0



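The fail-count backoff implemented by __get_static_huge_zero_folio() in the patch above can be modeled in plain userspace C. In this sketch, jiffies are replaced by a caller-supplied tick counter and the allocation outcome is injected as a parameter; both are assumptions made for illustration only:

```c
#include <assert.h>
#include <stdbool.h>

#define FAIL_COUNT_LIMIT 2
#define RETRY_DELAY 60	/* stands in for "one minute" of jiffies */

static int fail_count;
static unsigned long retry_at;

/*
 * Model of the backoff: after more than FAIL_COUNT_LIMIT allocation
 * failures, refuse further attempts until RETRY_DELAY ticks have passed.
 * Returns true when a reference would be taken at time `now`.
 */
static bool try_get_static(unsigned long now, bool alloc_ok)
{
	if (fail_count > FAIL_COUNT_LIMIT) {
		if (now < retry_at)
			return false;		/* still backing off */
		fail_count = 0;			/* window expired, retry */
	}
	if (!alloc_ok) {
		if (++fail_count > FAIL_COUNT_LIMIT)
			retry_at = now + RETRY_DELAY;
		return false;
	}
	return true;
}
```

Three failures in a row arm the retry window; attempts inside the window are rejected without touching the allocator, matching the patch's intent of not hammering the page allocator while it is under pressure.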

* [RFC v2 3/4] mm: add largest_zero_folio() routine
  2025-07-24 14:49 [RFC v2 0/4] add static huge zero folio support Pankaj Raghav (Samsung)
  2025-07-24 14:49 ` [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker Pankaj Raghav (Samsung)
  2025-07-24 14:49 ` [RFC v2 2/4] mm: add static huge zero folio Pankaj Raghav (Samsung)
@ 2025-07-24 14:50 ` Pankaj Raghav (Samsung)
  2025-08-01  4:30   ` Ritesh Harjani
  2025-07-24 14:50 ` [RFC v2 4/4] block: use largest_zero_folio in __blkdev_issue_zero_pages() Pankaj Raghav (Samsung)
  3 siblings, 1 reply; 16+ messages in thread
From: Pankaj Raghav (Samsung) @ 2025-07-24 14:50 UTC (permalink / raw)
  To: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Borislav Petkov,
	Ingo Molnar, H . Peter Anvin, Vlastimil Babka, Zi Yan,
	Mike Rapoport, Dave Hansen, Michal Hocko, David Hildenbrand,
	Lorenzo Stoakes, Andrew Morton, Thomas Gleixner, Nico Pache,
	Dev Jain, Liam R . Howlett, Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, kernel, hch, Pankaj Raghav

From: Pankaj Raghav <p.raghav@samsung.com>

Add a largest_zero_folio() routine so that huge_zero_folio can be
used directly when CONFIG_STATIC_HUGE_ZERO_FOLIO is enabled. It will
return the ZERO_PAGE folio if CONFIG_STATIC_HUGE_ZERO_FOLIO is disabled
or if allocating a huge_zero_folio failed.

Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
 include/linux/huge_mm.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 78ebceb61d0e..c44a6736704b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -716,4 +716,21 @@ static inline int split_folio_to_order(struct folio *folio, int new_order)
 	return split_folio_to_list_to_order(folio, NULL, new_order);
 }
 
+/*
+ * largest_zero_folio - Get the largest zero folio available
+ *
+ * This function will return huge_zero_folio if CONFIG_STATIC_HUGE_ZERO_FOLIO
+ * is enabled. Otherwise, a ZERO_PAGE folio is returned.
+ *
+ * Callers should deduce the size of the folio with folio_size() instead
+ * of assuming a particular folio size.
+ */
+static inline struct folio *largest_zero_folio(void)
+{
+	struct folio *folio = get_static_huge_zero_folio();
+
+	if (folio)
+		return folio;
+	return page_folio(ZERO_PAGE(0));
+}
 #endif /* _LINUX_HUGE_MM_H */
-- 
2.49.0




* [RFC v2 4/4] block: use largest_zero_folio in __blkdev_issue_zero_pages()
  2025-07-24 14:49 [RFC v2 0/4] add static huge zero folio support Pankaj Raghav (Samsung)
                   ` (2 preceding siblings ...)
  2025-07-24 14:50 ` [RFC v2 3/4] mm: add largest_zero_folio() routine Pankaj Raghav (Samsung)
@ 2025-07-24 14:50 ` Pankaj Raghav (Samsung)
  3 siblings, 0 replies; 16+ messages in thread
From: Pankaj Raghav (Samsung) @ 2025-07-24 14:50 UTC (permalink / raw)
  To: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Borislav Petkov,
	Ingo Molnar, H . Peter Anvin, Vlastimil Babka, Zi Yan,
	Mike Rapoport, Dave Hansen, Michal Hocko, David Hildenbrand,
	Lorenzo Stoakes, Andrew Morton, Thomas Gleixner, Nico Pache,
	Dev Jain, Liam R . Howlett, Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, kernel, hch, Pankaj Raghav

From: Pankaj Raghav <p.raghav@samsung.com>

Use largest_zero_folio() in __blkdev_issue_zero_pages().
On systems with CONFIG_STATIC_HUGE_ZERO_FOLIO enabled, we will end up
sending larger bvecs instead of multiple small ones.

I noticed a ~4% performance increase on a commercial NVMe SSD that does
not support REQ_OP_WRITE_ZEROES. The device's MDTS was 128K. The
performance gains might be bigger on devices with a larger MDTS.

Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
 block/blk-lib.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 4c9f20a689f7..3030a772d3aa 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -196,6 +196,8 @@ static void __blkdev_issue_zero_pages(struct block_device *bdev,
 		sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
 		struct bio **biop, unsigned int flags)
 {
+	struct folio *zero_folio = largest_zero_folio();
+
 	while (nr_sects) {
 		unsigned int nr_vecs = __blkdev_sectors_to_bio_pages(nr_sects);
 		struct bio *bio;
@@ -208,15 +210,14 @@ static void __blkdev_issue_zero_pages(struct block_device *bdev,
 			break;
 
 		do {
-			unsigned int len, added;
+			unsigned int len;
 
-			len = min_t(sector_t,
-				PAGE_SIZE, nr_sects << SECTOR_SHIFT);
-			added = bio_add_page(bio, ZERO_PAGE(0), len, 0);
-			if (added < len)
+			len = min_t(sector_t, folio_size(zero_folio),
+				    nr_sects << SECTOR_SHIFT);
+			if (!bio_add_folio(bio, zero_folio, len, 0))
 				break;
-			nr_sects -= added >> SECTOR_SHIFT;
-			sector += added >> SECTOR_SHIFT;
+			nr_sects -= len >> SECTOR_SHIFT;
+			sector += len >> SECTOR_SHIFT;
 		} while (nr_sects);
 
 		*biop = bio_chain_and_submit(*biop, bio);
-- 
2.49.0



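The sector accounting of the reworked loop above can be sketched in userspace; the 512-byte sector size and the 2 MiB folio returned by largest_zero_folio() are assumptions for illustration:

```c
#include <assert.h>

#define SECTOR_SHIFT_SKETCH 9			/* 512-byte sectors */
#define ZERO_FOLIO_BYTES (2UL * 1024 * 1024)	/* assumed folio_size() */

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* How many bvecs the loop adds in order to zero nr_sects sectors. */
static unsigned long zero_bvecs(unsigned long nr_sects)
{
	unsigned long count = 0;

	while (nr_sects) {
		unsigned long len = min_ul(ZERO_FOLIO_BYTES,
					   nr_sects << SECTOR_SHIFT_SKETCH);

		nr_sects -= len >> SECTOR_SHIFT_SKETCH;
		count++;
	}
	return count;
}
```

With ZERO_PAGE, the same 2 MiB range would have needed 512 bvec additions; with the huge zero folio it needs one.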

* Re: [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker
  2025-07-24 14:49 ` [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker Pankaj Raghav (Samsung)
@ 2025-07-25  2:52   ` Zi Yan
  2025-08-01  4:18   ` Ritesh Harjani
  2025-08-01 15:53   ` Lorenzo Stoakes
  2 siblings, 0 replies; 16+ messages in thread
From: Zi Yan @ 2025-07-25  2:52 UTC (permalink / raw)
  To: Pankaj Raghav (Samsung)
  Cc: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Borislav Petkov,
	Ingo Molnar, H . Peter Anvin, Vlastimil Babka, Mike Rapoport,
	Dave Hansen, Michal Hocko, David Hildenbrand, Lorenzo Stoakes,
	Andrew Morton, Thomas Gleixner, Nico Pache, Dev Jain,
	Liam R . Howlett, Jens Axboe, linux-kernel, willy, linux-mm, x86,
	linux-block, linux-fsdevel, Darrick J . Wong, mcgrof, gost.dev,
	hch, Pankaj Raghav

On 24 Jul 2025, at 10:49, Pankaj Raghav (Samsung) wrote:

> From: Pankaj Raghav <p.raghav@samsung.com>
>
> As we already moved from exposing huge_zero_page to huge_zero_folio,
> change the name of the shrinker to reflect that.
>
> No functional changes.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> ---
>  mm/huge_memory.c | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)
>
LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi



* Re: [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker
  2025-07-24 14:49 ` [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker Pankaj Raghav (Samsung)
  2025-07-25  2:52   ` Zi Yan
@ 2025-08-01  4:18   ` Ritesh Harjani
  2025-08-01 15:30     ` David Hildenbrand
  2025-08-01 15:53   ` Lorenzo Stoakes
  2 siblings, 1 reply; 16+ messages in thread
From: Ritesh Harjani @ 2025-08-01  4:18 UTC (permalink / raw)
  To: Pankaj Raghav (Samsung), Suren Baghdasaryan, Ryan Roberts,
	Baolin Wang, Borislav Petkov, Ingo Molnar, H . Peter Anvin,
	Vlastimil Babka, Zi Yan, Mike Rapoport, Dave Hansen, Michal Hocko,
	David Hildenbrand, Lorenzo Stoakes, Andrew Morton,
	Thomas Gleixner, Nico Pache, Dev Jain, Liam R . Howlett,
	Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, kernel, hch, Pankaj Raghav

"Pankaj Raghav (Samsung)" <kernel@pankajraghav.com> writes:

> From: Pankaj Raghav <p.raghav@samsung.com>
>
> As we already moved from exposing huge_zero_page to huge_zero_folio,
> change the name of the shrinker to reflect that.
>

Why not change get_huge_zero_page() to get_huge_zero_folio() too, for
consistent naming?

> No functional changes.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> ---
>  mm/huge_memory.c | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)

-ritesh



* Re: [RFC v2 2/4] mm: add static huge zero folio
  2025-07-24 14:49 ` [RFC v2 2/4] mm: add static huge zero folio Pankaj Raghav (Samsung)
@ 2025-08-01  4:23   ` Ritesh Harjani
  2025-08-04  8:41     ` Pankaj Raghav (Samsung)
  2025-08-01 15:49   ` David Hildenbrand
  1 sibling, 1 reply; 16+ messages in thread
From: Ritesh Harjani @ 2025-08-01  4:23 UTC (permalink / raw)
  To: Pankaj Raghav (Samsung), Suren Baghdasaryan, Ryan Roberts,
	Baolin Wang, Borislav Petkov, Ingo Molnar, H . Peter Anvin,
	Vlastimil Babka, Zi Yan, Mike Rapoport, Dave Hansen, Michal Hocko,
	David Hildenbrand, Lorenzo Stoakes, Andrew Morton,
	Thomas Gleixner, Nico Pache, Dev Jain, Liam R . Howlett,
	Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, kernel, hch, Pankaj Raghav

"Pankaj Raghav (Samsung)" <kernel@pankajraghav.com> writes:

> From: Pankaj Raghav <p.raghav@samsung.com>
>
> There are many places in the kernel where we need to zeroout larger
> chunks but the maximum segment we can zeroout at a time by ZERO_PAGE
> is limited by PAGE_SIZE.
>
> This is especially annoying in block devices and filesystems where we
> attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
> bvec support in block layer, it is much more efficient to send out
> larger zero pages as a part of single bvec.
>
> This concern was raised during the review of adding LBS support to
> XFS[1][2].
>
> Usually huge_zero_folio is allocated on demand, and it will be
> deallocated by the shrinker if there are no users of it left. At moment,
> huge_zero_folio infrastructure refcount is tied to the process lifetime
> that created it. This might not work for bio layer as the completions
> can be async and the process that created the huge_zero_folio might no
> longer be alive. And, one of the main point that came during discussion
> is to have something bigger than zero page as a drop-in replacement.
>
> Add a config option STATIC_HUGE_ZERO_FOLIO that will always allocate
> the huge_zero_folio, and it will never drop the reference. This makes
> using the huge_zero_folio without having to pass any mm struct and does
> not tie the lifetime of the zero folio to anything, making it a drop-in
> replacement for ZERO_PAGE.
>
> If STATIC_HUGE_ZERO_FOLIO config option is enabled, then
> mm_get_huge_zero_folio() will simply return this page instead of
> dynamically allocating a new PMD page.
>
> This option can waste memory in small systems or systems with 64k base
> page size. So make it an opt-in and also add an option from individual
> architecture so that we don't enable this feature for larger base page
> size systems.

Can you please help me understand why there will be memory waste with a
64k base page size if this feature gets enabled?

Is it because systems with a 64k base page size can have a much larger
PMD size than 2M and hence this static huge folio won't really get used?
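(For scale, the arithmetic behind the 64k concern can be sketched as follows; the 8-byte page-table entry size is an assumption that holds for x86-64 and arm64, not something stated in this thread:)

```c
#include <assert.h>

/*
 * Bytes covered by one PMD entry: a page full of 8-byte PTEs, each
 * mapping one base page, so coverage grows quadratically with page size.
 */
static unsigned long pmd_coverage(unsigned long page_size)
{
	return page_size * (page_size / 8);
}
```

A 4k base page gives a 2M PMD-sized huge zero folio, while a 64k base page gives 512M, which is why pinning it statically is called out as wasteful on 64k systems.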

Just want to understand this better. On Power with the Radix MMU, the
PMD size is still 2M, but with Hash it can be 16M. So I was considering
whether we should enable this with Radix, hence the ask to better
understand this.

-ritesh



* Re: [RFC v2 3/4] mm: add largest_zero_folio() routine
  2025-07-24 14:50 ` [RFC v2 3/4] mm: add largest_zero_folio() routine Pankaj Raghav (Samsung)
@ 2025-08-01  4:30   ` Ritesh Harjani
  2025-08-01 15:33     ` David Hildenbrand
  0 siblings, 1 reply; 16+ messages in thread
From: Ritesh Harjani @ 2025-08-01  4:30 UTC (permalink / raw)
  To: Pankaj Raghav (Samsung), Suren Baghdasaryan, Ryan Roberts,
	Baolin Wang, Borislav Petkov, Ingo Molnar, H . Peter Anvin,
	Vlastimil Babka, Zi Yan, Mike Rapoport, Dave Hansen, Michal Hocko,
	David Hildenbrand, Lorenzo Stoakes, Andrew Morton,
	Thomas Gleixner, Nico Pache, Dev Jain, Liam R . Howlett,
	Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, kernel, hch, Pankaj Raghav

"Pankaj Raghav (Samsung)" <kernel@pankajraghav.com> writes:

> From: Pankaj Raghav <p.raghav@samsung.com>
>
> Add largest_zero_folio() routine so that huge_zero_folio can be

[largest]_zero_folio() can sound a bit confusing with largest in its
name. Maybe optimal_zero_folio()?

No hard opinion though. Will leave it up to you.

-ritesh

> used directly when CONFIG_STATIC_HUGE_ZERO_FOLIO is enabled. This will
> return ZERO_PAGE folio if CONFIG_STATIC_HUGE_ZERO_FOLIO is disabled or
> if we failed to allocate a huge_zero_folio.
>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> ---
>  include/linux/huge_mm.h | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 78ebceb61d0e..c44a6736704b 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -716,4 +716,21 @@ static inline int split_folio_to_order(struct folio *folio, int new_order)
>  	return split_folio_to_list_to_order(folio, NULL, new_order);
>  }
>  
> +/*
> + * largest_zero_folio - Get the largest zero size folio available
> + *
> + * This function will return huge_zero_folio if CONFIG_STATIC_HUGE_ZERO_FOLIO
> + * is enabled. Otherwise, a ZERO_PAGE folio is returned.
> + *
> + * Deduce the size of the folio with folio_size instead of assuming the
> + * folio size.
> + */
> +static inline struct folio *largest_zero_folio(void)
> +{
> +	struct folio *folio = get_static_huge_zero_folio();
> +
> +	if (folio)
> +		return folio;
> +	return page_folio(ZERO_PAGE(0));
> +}
>  #endif /* _LINUX_HUGE_MM_H */
> -- 
> 2.49.0



* Re: [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker
  2025-08-01  4:18   ` Ritesh Harjani
@ 2025-08-01 15:30     ` David Hildenbrand
  2025-08-04  8:36       ` Pankaj Raghav (Samsung)
  0 siblings, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2025-08-01 15:30 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), Pankaj Raghav (Samsung), Suren Baghdasaryan,
	Ryan Roberts, Baolin Wang, Borislav Petkov, Ingo Molnar,
	H . Peter Anvin, Vlastimil Babka, Zi Yan, Mike Rapoport,
	Dave Hansen, Michal Hocko, Lorenzo Stoakes, Andrew Morton,
	Thomas Gleixner, Nico Pache, Dev Jain, Liam R . Howlett,
	Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, hch, Pankaj Raghav

On 01.08.25 06:18, Ritesh Harjani (IBM) wrote:
> "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com> writes:
> 
>> From: Pankaj Raghav <p.raghav@samsung.com>
>>
>> As we already moved from exposing huge_zero_page to huge_zero_folio,
>> change the name of the shrinker to reflect that.
>>
> 
> Why not change get_huge_zero_page() to get_huge_zero_folio() too, for
> consistent naming?

Then we should also rename put_huge_zero_folio(). Renaming 
MMF_HUGE_ZERO_PAGE should probably be done separately.

-- 
Cheers,

David / dhildenb




* Re: [RFC v2 3/4] mm: add largest_zero_folio() routine
  2025-08-01  4:30   ` Ritesh Harjani
@ 2025-08-01 15:33     ` David Hildenbrand
  0 siblings, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2025-08-01 15:33 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), Pankaj Raghav (Samsung), Suren Baghdasaryan,
	Ryan Roberts, Baolin Wang, Borislav Petkov, Ingo Molnar,
	H . Peter Anvin, Vlastimil Babka, Zi Yan, Mike Rapoport,
	Dave Hansen, Michal Hocko, Lorenzo Stoakes, Andrew Morton,
	Thomas Gleixner, Nico Pache, Dev Jain, Liam R . Howlett,
	Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, hch, Pankaj Raghav

On 01.08.25 06:30, Ritesh Harjani (IBM) wrote:
> "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com> writes:
> 
>> From: Pankaj Raghav <p.raghav@samsung.com>
>>
>> Add largest_zero_folio() routine so that huge_zero_folio can be
> 
> [largest]_zero_folio() can sound a bit confusing with largest in its
> name. Maybe optimal_zero_folio()?

I prefer largest, it clearly documents what you get.

huge vs large is a different discussion that goes back to "huge pages -> 
huge zero page".

-- 
Cheers,

David / dhildenb




* Re: [RFC v2 2/4] mm: add static huge zero folio
  2025-07-24 14:49 ` [RFC v2 2/4] mm: add static huge zero folio Pankaj Raghav (Samsung)
  2025-08-01  4:23   ` Ritesh Harjani
@ 2025-08-01 15:49   ` David Hildenbrand
  2025-08-04 10:41     ` Pankaj Raghav (Samsung)
  1 sibling, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2025-08-01 15:49 UTC (permalink / raw)
  To: Pankaj Raghav (Samsung), Suren Baghdasaryan, Ryan Roberts,
	Baolin Wang, Borislav Petkov, Ingo Molnar, H . Peter Anvin,
	Vlastimil Babka, Zi Yan, Mike Rapoport, Dave Hansen, Michal Hocko,
	Lorenzo Stoakes, Andrew Morton, Thomas Gleixner, Nico Pache,
	Dev Jain, Liam R . Howlett, Jens Axboe
  Cc: linux-kernel, willy, linux-mm, x86, linux-block, linux-fsdevel,
	Darrick J . Wong, mcgrof, gost.dev, hch, Pankaj Raghav

On 24.07.25 16:49, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
> 
> There are many places in the kernel where we need to zero out larger
> chunks, but the maximum segment we can zero out at a time with ZERO_PAGE
> is limited to PAGE_SIZE.
> 
> This is especially annoying in block devices and filesystems where we
> attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
> bvec support in the block layer, it is much more efficient to send out
> larger zero pages as part of a single bvec.
> 
> This concern was raised during the review of adding LBS support to
> XFS[1][2].
> 
> Usually huge_zero_folio is allocated on demand, and it will be
> deallocated by the shrinker if there are no users of it left. At the
> moment, the huge_zero_folio refcount is tied to the lifetime of the
> process that created it. This might not work for the bio layer, as the
> completions can be async and the process that created the huge_zero_folio
> might no longer be alive. And one of the main points that came up during
> the discussion was to have something bigger than the zero page as a
> drop-in replacement.
> 
> Add a config option STATIC_HUGE_ZERO_FOLIO that will always allocate

"... will result in allocating the huge zero folio on first request, if not already allocated, and turn it static such that it can never get freed."

> the huge_zero_folio, and it will never drop the reference. This makes it
> possible to use the huge_zero_folio without having to pass any mm struct
> and does not tie the lifetime of the zero folio to anything, making it a
> drop-in replacement for ZERO_PAGE.
> 
> If STATIC_HUGE_ZERO_FOLIO config option is enabled, then
> mm_get_huge_zero_folio() will simply return this page instead of
> dynamically allocating a new PMD page.
> 
> This option can waste memory on small systems or systems with a 64k base
> page size. So make it opt-in, and also add a per-architecture option so
> that we don't enable this feature on systems with larger base page
> sizes.
> 
> [1] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
> [2] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/
> 
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> ---
>   arch/x86/Kconfig        |  1 +
>   include/linux/huge_mm.h | 18 ++++++++++++++++++
>   mm/Kconfig              | 21 +++++++++++++++++++++
>   mm/huge_memory.c        | 42 +++++++++++++++++++++++++++++++++++++++++
>   4 files changed, 82 insertions(+)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 0ce86e14ab5e..8e2aa1887309 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -153,6 +153,7 @@ config X86
>   	select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP	if X86_64
>   	select ARCH_WANT_HUGETLB_VMEMMAP_PREINIT if X86_64
>   	select ARCH_WANTS_THP_SWAP		if X86_64
> +	select ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO if X86_64
>   	select ARCH_HAS_PARANOID_L1D_FLUSH
>   	select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
>   	select BUILDTIME_TABLE_SORT
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 7748489fde1b..78ebceb61d0e 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -476,6 +476,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
>   
>   extern struct folio *huge_zero_folio;
>   extern unsigned long huge_zero_pfn;
> +extern atomic_t huge_zero_folio_is_static;
>   
>   static inline bool is_huge_zero_folio(const struct folio *folio)
>   {
> @@ -494,6 +495,18 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
>   
>   struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
>   void mm_put_huge_zero_folio(struct mm_struct *mm);
> +struct folio *__get_static_huge_zero_folio(void);
> +
> +static inline struct folio *get_static_huge_zero_folio(void)
> +{
> +	if (!IS_ENABLED(CONFIG_STATIC_HUGE_ZERO_FOLIO))
> +		return NULL;
> +
> +	if (likely(atomic_read(&huge_zero_folio_is_static)))
> +		return huge_zero_folio;
> +
> +	return __get_static_huge_zero_folio();
> +}
>   
>   static inline bool thp_migration_supported(void)
>   {
> @@ -685,6 +698,11 @@ static inline int change_huge_pud(struct mmu_gather *tlb,
>   {
>   	return 0;
>   }
> +
> +static inline struct folio *get_static_huge_zero_folio(void)
> +{
> +	return NULL;
> +}
>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>   
>   static inline int split_folio_to_list_to_order(struct folio *folio,
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 0287e8d94aea..e2132fcf2ccb 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -835,6 +835,27 @@ config ARCH_WANT_GENERAL_HUGETLB
>   config ARCH_WANTS_THP_SWAP
>   	def_bool n
>   
> +config ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO
> +	def_bool n
> +
> +config STATIC_HUGE_ZERO_FOLIO
> +	bool "Allocate a PMD sized folio for zeroing"
> +	depends on ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO && TRANSPARENT_HUGEPAGE
> +	help
> +	  Without this config enabled, the huge zero folio is allocated on
> +	  demand and freed under memory pressure once no longer in use.
> +	  To detect remaining users reliably, references to the huge zero folio
> +	  must be tracked precisely, so it is commonly only available for mapping
> +	  it into user page tables.
> +
> +	  With this config enabled, the huge zero folio can also be used
> +	  for other purposes that do not implement precise reference counting:
> +	  it is still allocated on demand, but never freed, allowing for more
> +	  widespread use, for example when performing I/O, similar to the
> +	  traditional shared zeropage.
> +
> +	  Not suitable for memory-constrained systems.
> +
>   config MM_ID
>   	def_bool n
>   
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 5d8365d1d3e9..c160c37f4d31 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -75,6 +75,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>   static bool split_underused_thp = true;
>   
>   static atomic_t huge_zero_refcount;
> +atomic_t huge_zero_folio_is_static __read_mostly;
>   struct folio *huge_zero_folio __read_mostly;
>   unsigned long huge_zero_pfn __read_mostly = ~0UL;
>   unsigned long huge_anon_orders_always __read_mostly;
> @@ -266,6 +267,47 @@ void mm_put_huge_zero_folio(struct mm_struct *mm)
>   		put_huge_zero_page();
>   }
>   
> +#ifdef CONFIG_STATIC_HUGE_ZERO_FOLIO
> +#define FAIL_COUNT_LIMIT 2
> +
> +struct folio *__get_static_huge_zero_folio(void)
> +{
> +	static unsigned long fail_count_clear_timer;
> +	static atomic_t huge_zero_static_fail_count __read_mostly;
> +
> +	if (unlikely(!slab_is_available()))
> +		return NULL;
> +
> +	/*
> +	 * If we failed to allocate a huge zero folio multiple times,
> +	 * refrain from further attempts for one minute before trying
> +	 * to get a reference again.
> +	 */

Is this "try twice" really worth it? Just try once, and if it fails, only try again in the future.

I guess we'll learn how that will behave in practice, and how we'll have to fine-tune it :)


In shrink_huge_zero_page_scan(), should we probably warn if something buggy happens?

Something like

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2b4ea5a2ce7d2..b1109f8699a24 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -277,7 +277,11 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
                                        struct shrink_control *sc)
  {
         if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
-               struct folio *zero_folio = xchg(&huge_zero_folio, NULL);
+               struct folio *zero_folio;
+
+               if (WARN_ON_ONCE(atomic_read(&huge_zero_folio_is_static)))
+                       return 0;
+               zero_folio = xchg(&huge_zero_folio, NULL);
                 BUG_ON(zero_folio == NULL);
                 WRITE_ONCE(huge_zero_pfn, ~0UL);
                 folio_put(zero_folio);


-- 
Cheers,

David / dhildenb




* Re: [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker
  2025-07-24 14:49 ` [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker Pankaj Raghav (Samsung)
  2025-07-25  2:52   ` Zi Yan
  2025-08-01  4:18   ` Ritesh Harjani
@ 2025-08-01 15:53   ` Lorenzo Stoakes
  2 siblings, 0 replies; 16+ messages in thread
From: Lorenzo Stoakes @ 2025-08-01 15:53 UTC (permalink / raw)
  To: Pankaj Raghav (Samsung)
  Cc: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Borislav Petkov,
	Ingo Molnar, H . Peter Anvin, Vlastimil Babka, Zi Yan,
	Mike Rapoport, Dave Hansen, Michal Hocko, David Hildenbrand,
	Andrew Morton, Thomas Gleixner, Nico Pache, Dev Jain,
	Liam R . Howlett, Jens Axboe, linux-kernel, willy, linux-mm, x86,
	linux-block, linux-fsdevel, Darrick J . Wong, mcgrof, gost.dev,
	hch, Pankaj Raghav

On Thu, Jul 24, 2025 at 04:49:58PM +0200, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
>
> As we already moved from exposing huge_zero_page to huge_zero_folio,
> change the name of the shrinker to reflect that.
>
> No functional changes.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>

Makes sense to rename other related stuff as pointed out by Ritesh and
David, but for this part:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  mm/huge_memory.c | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2b4ea5a2ce7d..5d8365d1d3e9 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -266,15 +266,15 @@ void mm_put_huge_zero_folio(struct mm_struct *mm)
>  		put_huge_zero_page();
>  }
>
> -static unsigned long shrink_huge_zero_page_count(struct shrinker *shrink,
> -					struct shrink_control *sc)
> +static unsigned long shrink_huge_zero_folio_count(struct shrinker *shrink,
> +						  struct shrink_control *sc)
>  {
>  	/* we can free zero page only if last reference remains */
>  	return atomic_read(&huge_zero_refcount) == 1 ? HPAGE_PMD_NR : 0;
>  }
>
> -static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
> -				       struct shrink_control *sc)
> +static unsigned long shrink_huge_zero_folio_scan(struct shrinker *shrink,
> +						 struct shrink_control *sc)
>  {
>  	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
>  		struct folio *zero_folio = xchg(&huge_zero_folio, NULL);
> @@ -287,7 +287,7 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
>  	return 0;
>  }
>
> -static struct shrinker *huge_zero_page_shrinker;
> +static struct shrinker *huge_zero_folio_shrinker;
>
>  #ifdef CONFIG_SYSFS
>  static ssize_t enabled_show(struct kobject *kobj,
> @@ -849,8 +849,8 @@ static inline void hugepage_exit_sysfs(struct kobject *hugepage_kobj)
>
>  static int __init thp_shrinker_init(void)
>  {
> -	huge_zero_page_shrinker = shrinker_alloc(0, "thp-zero");
> -	if (!huge_zero_page_shrinker)
> +	huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
> +	if (!huge_zero_folio_shrinker)
>  		return -ENOMEM;
>
>  	deferred_split_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
> @@ -858,13 +858,13 @@ static int __init thp_shrinker_init(void)
>  						 SHRINKER_NONSLAB,
>  						 "thp-deferred_split");
>  	if (!deferred_split_shrinker) {
> -		shrinker_free(huge_zero_page_shrinker);
> +		shrinker_free(huge_zero_folio_shrinker);
>  		return -ENOMEM;
>  	}
>
> -	huge_zero_page_shrinker->count_objects = shrink_huge_zero_page_count;
> -	huge_zero_page_shrinker->scan_objects = shrink_huge_zero_page_scan;
> -	shrinker_register(huge_zero_page_shrinker);
> +	huge_zero_folio_shrinker->count_objects = shrink_huge_zero_folio_count;
> +	huge_zero_folio_shrinker->scan_objects = shrink_huge_zero_folio_scan;
> +	shrinker_register(huge_zero_folio_shrinker);
>
>  	deferred_split_shrinker->count_objects = deferred_split_count;
>  	deferred_split_shrinker->scan_objects = deferred_split_scan;
> @@ -875,7 +875,7 @@ static int __init thp_shrinker_init(void)
>
>  static void __init thp_shrinker_exit(void)
>  {
> -	shrinker_free(huge_zero_page_shrinker);
> +	shrinker_free(huge_zero_folio_shrinker);
>  	shrinker_free(deferred_split_shrinker);
>  }
>
> --
> 2.49.0
>



* Re: [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker
  2025-08-01 15:30     ` David Hildenbrand
@ 2025-08-04  8:36       ` Pankaj Raghav (Samsung)
  0 siblings, 0 replies; 16+ messages in thread
From: Pankaj Raghav (Samsung) @ 2025-08-04  8:36 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Ritesh Harjani (IBM), Suren Baghdasaryan, Ryan Roberts,
	Baolin Wang, Borislav Petkov, Ingo Molnar, H . Peter Anvin,
	Vlastimil Babka, Zi Yan, Mike Rapoport, Dave Hansen, Michal Hocko,
	Lorenzo Stoakes, Andrew Morton, Thomas Gleixner, Nico Pache,
	Dev Jain, Liam R . Howlett, Jens Axboe, linux-kernel, willy,
	linux-mm, x86, linux-block, linux-fsdevel, Darrick J . Wong,
	mcgrof, gost.dev, hch, Pankaj Raghav

On Fri, Aug 01, 2025 at 05:30:46PM +0200, David Hildenbrand wrote:
> On 01.08.25 06:18, Ritesh Harjani (IBM) wrote:
> > "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com> writes:
> > 
> > > From: Pankaj Raghav <p.raghav@samsung.com>
> > > 
> > > As we already moved from exposing huge_zero_page to huge_zero_folio,
> > > change the name of the shrinker to reflect that.
> > > 
> > 
> > Why not change get_huge_zero_page() to get_huge_zero_folio() too, for
> > consistent naming?
> 
> Then we should also rename put_huge_zero_folio(). Renaming
> MMF_HUGE_ZERO_PAGE should probably be done separately.

Thanks Ritesh and David.

I will change them in the next version! :)

--
Pankaj



* Re: [RFC v2 2/4] mm: add static huge zero folio
  2025-08-01  4:23   ` Ritesh Harjani
@ 2025-08-04  8:41     ` Pankaj Raghav (Samsung)
  0 siblings, 0 replies; 16+ messages in thread
From: Pankaj Raghav (Samsung) @ 2025-08-04  8:41 UTC (permalink / raw)
  To: Ritesh Harjani
  Cc: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Borislav Petkov,
	Ingo Molnar, H . Peter Anvin, Vlastimil Babka, Zi Yan,
	Mike Rapoport, Dave Hansen, Michal Hocko, David Hildenbrand,
	Lorenzo Stoakes, Andrew Morton, Thomas Gleixner, Nico Pache,
	Dev Jain, Liam R . Howlett, Jens Axboe, linux-kernel, willy,
	linux-mm, x86, linux-block, linux-fsdevel, Darrick J . Wong,
	mcgrof, gost.dev, hch, Pankaj Raghav

> > This option can waste memory in small systems or systems with 64k base
> > page size. So make it an opt-in and also add an option from individual
> > architecture so that we don't enable this feature for larger base page
> > size systems.
> 
> Can you please help me understand why there will be memory waste with
> 64k base pagesize, if this feature gets enabled?
> 
> Is it because systems with 64k base pagesize can have a much larger PMD
> size than 2M and hence this static huge folio won't really get used?

Yeah, exactly. More than 2M seems to be excessive for zeroing.

> 
> Just want to understand this better. On Power with Radix MMU, PMD size
> is still 2M, but with Hash it can be 16M.
> So I was considering if we should enable this with Radix. Hence the ask
> to better understand this.

I enabled it only for x86 as part of this series to reduce the scope. But
the idea is to enable it for all architectures with a reasonable PMD size,
like ARM with 4k pages, Power with the Radix MMU, etc.

Once we get the base patches in, I can follow up with enabling it for
those architectures.

--
Pankaj



* Re: [RFC v2 2/4] mm: add static huge zero folio
  2025-08-01 15:49   ` David Hildenbrand
@ 2025-08-04 10:41     ` Pankaj Raghav (Samsung)
  0 siblings, 0 replies; 16+ messages in thread
From: Pankaj Raghav (Samsung) @ 2025-08-04 10:41 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Suren Baghdasaryan, Ryan Roberts, Baolin Wang, Borislav Petkov,
	Ingo Molnar, H . Peter Anvin, Vlastimil Babka, Zi Yan,
	Mike Rapoport, Dave Hansen, Michal Hocko, Lorenzo Stoakes,
	Andrew Morton, Thomas Gleixner, Nico Pache, Dev Jain,
	Liam R . Howlett, Jens Axboe, linux-kernel, willy, linux-mm, x86,
	linux-block, linux-fsdevel, Darrick J . Wong, mcgrof, gost.dev,
	hch, Pankaj Raghav

On Fri, Aug 01, 2025 at 05:49:10PM +0200, David Hildenbrand wrote:
> On 24.07.25 16:49, Pankaj Raghav (Samsung) wrote:
> > From: Pankaj Raghav <p.raghav@samsung.com>
> > 
> > There are many places in the kernel where we need to zero out larger
> > chunks, but the maximum segment we can zero out at a time with ZERO_PAGE
> > is limited to PAGE_SIZE.
> > 
> > This is especially annoying in block devices and filesystems where we
> > attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
> > bvec support in the block layer, it is much more efficient to send out
> > larger zero pages as part of a single bvec.
> > 
> > This concern was raised during the review of adding LBS support to
> > XFS[1][2].
> > 
> > Usually huge_zero_folio is allocated on demand, and it will be
> > deallocated by the shrinker if there are no users of it left. At the
> > moment, the huge_zero_folio refcount is tied to the lifetime of the
> > process that created it. This might not work for the bio layer, as the
> > completions can be async and the process that created the huge_zero_folio
> > might no longer be alive. And one of the main points that came up during
> > the discussion was to have something bigger than the zero page as a
> > drop-in replacement.
> > 
> > Add a config option STATIC_HUGE_ZERO_FOLIO that will always allocate
> 
> "... will result in allocating the huge zero folio on first request, if not already allocated, and turn it static such that it can never get freed."

Sounds good.
> 
> > the huge_zero_folio, and it will never drop the reference. This makes it
> > possible to use the huge_zero_folio without having to pass any mm struct
> > and does not tie the lifetime of the zero folio to anything, making it a
> > drop-in replacement for ZERO_PAGE.
> > 
> > If STATIC_HUGE_ZERO_FOLIO config option is enabled, then
> > mm_get_huge_zero_folio() will simply return this page instead of
> > dynamically allocating a new PMD page.
> > 
> > This option can waste memory on small systems or systems with a 64k base
> > page size. So make it opt-in, and also add a per-architecture option so
> > that we don't enable this feature on systems with larger base page
> > sizes.
> > 
> > [1] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
> > [2] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/
> > 
> > Co-developed-by: David Hildenbrand <david@redhat.com>
> > Signed-off-by: David Hildenbrand <david@redhat.com>
> > Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> > ---
> >   arch/x86/Kconfig        |  1 +
> >   include/linux/huge_mm.h | 18 ++++++++++++++++++
> >   mm/Kconfig              | 21 +++++++++++++++++++++
> >   mm/huge_memory.c        | 42 +++++++++++++++++++++++++++++++++++++++++
> >   4 files changed, 82 insertions(+)
> > 
> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index 0ce86e14ab5e..8e2aa1887309 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -153,6 +153,7 @@ config X86
> >   	select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP	if X86_64
> >   	select ARCH_WANT_HUGETLB_VMEMMAP_PREINIT if X86_64
> >   	select ARCH_WANTS_THP_SWAP		if X86_64
> > +	select ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO if X86_64
> >   	select ARCH_HAS_PARANOID_L1D_FLUSH
> >   	select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
> >   	select BUILDTIME_TABLE_SORT
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index 7748489fde1b..78ebceb61d0e 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -476,6 +476,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
> >   extern struct folio *huge_zero_folio;
> >   extern unsigned long huge_zero_pfn;
> > +extern atomic_t huge_zero_folio_is_static;
> >   static inline bool is_huge_zero_folio(const struct folio *folio)
> >   {
> > @@ -494,6 +495,18 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
> >   struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
> >   void mm_put_huge_zero_folio(struct mm_struct *mm);
> > +struct folio *__get_static_huge_zero_folio(void);
> > +
> > +static inline struct folio *get_static_huge_zero_folio(void)
> > +{
> > +	if (!IS_ENABLED(CONFIG_STATIC_HUGE_ZERO_FOLIO))
> > +		return NULL;
> > +
> > +	if (likely(atomic_read(&huge_zero_folio_is_static)))
> > +		return huge_zero_folio;
> > +
> > +	return __get_static_huge_zero_folio();
> > +}
> >   static inline bool thp_migration_supported(void)
> >   {
> > @@ -685,6 +698,11 @@ static inline int change_huge_pud(struct mmu_gather *tlb,
> >   {
> >   	return 0;
> >   }
> > +
> > +static inline struct folio *get_static_huge_zero_folio(void)
> > +{
> > +	return NULL;
> > +}
> >   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >   static inline int split_folio_to_list_to_order(struct folio *folio,
> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index 0287e8d94aea..e2132fcf2ccb 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -835,6 +835,27 @@ config ARCH_WANT_GENERAL_HUGETLB
> >   config ARCH_WANTS_THP_SWAP
> >   	def_bool n
> > +config ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO
> > +	def_bool n
> > +
> > +config STATIC_HUGE_ZERO_FOLIO
> > +	bool "Allocate a PMD sized folio for zeroing"
> > +	depends on ARCH_WANTS_STATIC_HUGE_ZERO_FOLIO && TRANSPARENT_HUGEPAGE
> > +	help
> > +	  Without this config enabled, the huge zero folio is allocated on
> > +	  demand and freed under memory pressure once no longer in use.
> > +	  To detect remaining users reliably, references to the huge zero folio
> > +	  must be tracked precisely, so it is commonly only available for mapping
> > +	  it into user page tables.
> > +
> > +	  With this config enabled, the huge zero folio can also be used
> > +	  for other purposes that do not implement precise reference counting:
> > +	  it is still allocated on demand, but never freed, allowing for more
> > +	  widespread use, for example when performing I/O, similar to the
> > +	  traditional shared zeropage.
> > +
> > +	  Not suitable for memory-constrained systems.
> > +
> >   config MM_ID
> >   	def_bool n
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 5d8365d1d3e9..c160c37f4d31 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -75,6 +75,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
> >   static bool split_underused_thp = true;
> >   static atomic_t huge_zero_refcount;
> > +atomic_t huge_zero_folio_is_static __read_mostly;
> >   struct folio *huge_zero_folio __read_mostly;
> >   unsigned long huge_zero_pfn __read_mostly = ~0UL;
> >   unsigned long huge_anon_orders_always __read_mostly;
> > @@ -266,6 +267,47 @@ void mm_put_huge_zero_folio(struct mm_struct *mm)
> >   		put_huge_zero_page();
> >   }
> > +#ifdef CONFIG_STATIC_HUGE_ZERO_FOLIO
> > +#define FAIL_COUNT_LIMIT 2
> > +
> > +struct folio *__get_static_huge_zero_folio(void)
> > +{
> > +	static unsigned long fail_count_clear_timer;
> > +	static atomic_t huge_zero_static_fail_count __read_mostly;
> > +
> > +	if (unlikely(!slab_is_available()))
> > +		return NULL;
> > +
> > +	/*
> > +	 * If we failed to allocate a huge zero folio multiple times,
> > +	 * refrain from further attempts for one minute before trying
> > +	 * to get a reference again.
> > +	 */
> 
> Is this "try twice" really worth it? Just try once, and if it fails, only try again in the future.
> 
Yeah, that makes sense. Let's go with try it once for now.

> I guess we'll learn how that will behave in practice, and how we'll have to fine-tune it :)
> 
> 
> In shrink_huge_zero_page_scan(), should we probably warn if something buggy happens?
Yeah, I can fold this in the next version. I guess WARN_ON_ONCE already
adds an unlikely to the condition, which is appropriate.

> 
> Something like
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2b4ea5a2ce7d2..b1109f8699a24 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -277,7 +277,11 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
>                                        struct shrink_control *sc)
>  {
>         if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
> -               struct folio *zero_folio = xchg(&huge_zero_folio, NULL);
> +               struct folio *zero_folio;
> +
> +               if (WARN_ON_ONCE(atomic_read(&huge_zero_folio_is_static)))
> +                       return 0;
> +               zero_folio = xchg(&huge_zero_folio, NULL);
>                 BUG_ON(zero_folio == NULL);
>                 WRITE_ONCE(huge_zero_pfn, ~0UL);
>                 folio_put(zero_folio);
> 
> 
> -- 
> Cheers,
> 
--
Pankaj



end of thread, other threads:[~2025-08-04 10:41 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-24 14:49 [RFC v2 0/4] add static huge zero folio support Pankaj Raghav (Samsung)
2025-07-24 14:49 ` [RFC v2 1/4] mm: rename huge_zero_page_shrinker to huge_zero_folio_shrinker Pankaj Raghav (Samsung)
2025-07-25  2:52   ` Zi Yan
2025-08-01  4:18   ` Ritesh Harjani
2025-08-01 15:30     ` David Hildenbrand
2025-08-04  8:36       ` Pankaj Raghav (Samsung)
2025-08-01 15:53   ` Lorenzo Stoakes
2025-07-24 14:49 ` [RFC v2 2/4] mm: add static huge zero folio Pankaj Raghav (Samsung)
2025-08-01  4:23   ` Ritesh Harjani
2025-08-04  8:41     ` Pankaj Raghav (Samsung)
2025-08-01 15:49   ` David Hildenbrand
2025-08-04 10:41     ` Pankaj Raghav (Samsung)
2025-07-24 14:50 ` [RFC v2 3/4] mm: add largest_zero_folio() routine Pankaj Raghav (Samsung)
2025-08-01  4:30   ` Ritesh Harjani
2025-08-01 15:33     ` David Hildenbrand
2025-07-24 14:50 ` [RFC v2 4/4] block: use largest_zero_folio in __blkdev_issue_zero_pages() Pankaj Raghav (Samsung)
