linux-fsdevel.vger.kernel.org archive mirror
* [PATCH v5 0/3] Optimize folio split in memory failure
@ 2025-10-31 16:19 Zi Yan
  2025-10-31 16:19 ` [PATCH v5 1/3] mm/huge_memory: add split_huge_page_to_order() Zi Yan
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Zi Yan @ 2025-10-31 16:19 UTC (permalink / raw)
  To: linmiaohe, david, jane.chu
  Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Matthew Wilcox (Oracle), Wei Yang,
	Yang Shi, linux-fsdevel, linux-kernel, linux-mm

This patchset optimizes folio split operations in the memory failure code
by always splitting a folio to min_order_for_split() to minimize unusable
pages, even if min_order_for_split() is non-zero and the memory failure
code would eventually take the failure path for a successfully split
folio. This means that instead of making the entire original folio
unusable, the memory failure code only makes the after-split folio that
contains the HWPoison page, which has order min_order_for_split(),
unusable.
For the soft offline case, since the original folio is still accessible,
no split is performed if the folio cannot be split to order-0, to prevent
a potential performance loss. In addition, add split_huge_page_to_order()
to improve code readability and fix the kernel-doc comment format of
folio_split() and other related functions.

This series is based on mm-new without v4 of this patchset applied.

Background
===

This patchset is a follow-up to "[PATCH v3] mm/huge_memory: do not change
split_huge_page*() target order silently."[1] and "[PATCH v4]
mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to
>0 order"[2], since both were separated out as hotfixes. It improves how
the memory failure code handles large block size (LBS) folios with
min_order_for_split() > 0. By splitting a large folio containing HW
poisoned pages to min_order_for_split(), the after-split folios without
HW poisoned pages can be freed for reuse. To achieve this, the folio
split code needs to set has_hwpoisoned on after-split folios containing
HW poisoned pages, which is done in the hotfix in [2].

This patchset includes:
1. Patch 1, which adds split_huge_page_to_order(),
2. Patch 2 and Patch 3 of "[PATCH v2 0/3] Do not change split folio target
   order"[3].


Changelog
===
From V4[5]:
1. updated cover letter.
2. updated __split_unmapped_folio() comment and removed stale text.

From V3[4]:
1. The patch "mm/huge_memory: preserve PG_has_hwpoisoned if a folio is
   split to >0 order" was sent separately as a hotfix[2].
2. made newly added new_order const in memory_failure() and
   soft_offline_in_use_page().
3. explained in a comment why in memory_failure() after-split >0 order
   folios are still treated as if the split failed.


From V2[3]:
1. Patch 1 was sent separately as a hotfix[1].
2. set has_hwpoisoned on after-split folios if any contains HW poisoned
   pages.
3. added split_huge_page_to_order().
4. added a missing newline after a variable declaration.
5. added /* release= */ to try_to_split_thp_page().
6. restructured try_to_split_thp_page() in memory_failure().
7. fixed a typo.
8. reworded the comment in soft_offline_in_use_page() for better
   understanding.


Link: https://lore.kernel.org/all/20251017013630.139907-1-ziy@nvidia.com/ [1]
Link: https://lore.kernel.org/all/20251023030521.473097-1-ziy@nvidia.com/ [2]
Link: https://lore.kernel.org/all/20251016033452.125479-1-ziy@nvidia.com/ [3]
Link: https://lore.kernel.org/all/20251022033531.389351-1-ziy@nvidia.com/ [4]
Link: https://lore.kernel.org/all/20251030014020.475659-1-ziy@nvidia.com/ [5]

Zi Yan (3):
  mm/huge_memory: add split_huge_page_to_order()
  mm/memory-failure: improve large block size folio handling.
  mm/huge_memory: fix kernel-doc comments for folio_split() and related.

 include/linux/huge_mm.h | 22 ++++++++++++++------
 mm/huge_memory.c        | 45 ++++++++++++++++++++++-------------------
 mm/memory-failure.c     | 31 ++++++++++++++++++++++++----
 3 files changed, 67 insertions(+), 31 deletions(-)

-- 
2.51.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v5 1/3] mm/huge_memory: add split_huge_page_to_order()
  2025-10-31 16:19 [PATCH v5 0/3] Optimize folio split in memory failure Zi Yan
@ 2025-10-31 16:19 ` Zi Yan
  2025-10-31 16:20 ` [PATCH v5 2/3] mm/memory-failure: improve large block size folio handling Zi Yan
  2025-10-31 16:20 ` [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related Zi Yan
  2 siblings, 0 replies; 9+ messages in thread
From: Zi Yan @ 2025-10-31 16:19 UTC (permalink / raw)
  To: linmiaohe, david, jane.chu
  Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Matthew Wilcox (Oracle), Wei Yang,
	Yang Shi, linux-fsdevel, linux-kernel, linux-mm

When the caller does not supply a list to
split_huge_page_to_list_to_order(), use split_huge_page_to_order()
instead.

Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Barry Song <baohua@kernel.org>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 include/linux/huge_mm.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7698b3542c4f..34f8d8453bf3 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -381,6 +381,10 @@ static inline int split_huge_page_to_list_to_order(struct page *page, struct lis
 {
 	return __split_huge_page_to_list_to_order(page, list, new_order, false);
 }
+static inline int split_huge_page_to_order(struct page *page, unsigned int new_order)
+{
+	return split_huge_page_to_list_to_order(page, NULL, new_order);
+}
 
 /*
  * try_folio_split_to_order - try to split a @folio at @page to @new_order using
@@ -400,8 +404,7 @@ static inline int try_folio_split_to_order(struct folio *folio,
 		struct page *page, unsigned int new_order)
 {
 	if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
-		return split_huge_page_to_list_to_order(&folio->page, NULL,
-				new_order);
+		return split_huge_page_to_order(&folio->page, new_order);
 	return folio_split(folio, new_order, page, NULL);
 }
 static inline int split_huge_page(struct page *page)
@@ -590,6 +593,11 @@ split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	VM_WARN_ON_ONCE_PAGE(1, page);
 	return -EINVAL;
 }
+static inline int split_huge_page_to_order(struct page *page, unsigned int new_order)
+{
+	VM_WARN_ON_ONCE_PAGE(1, page);
+	return -EINVAL;
+}
 static inline int split_huge_page(struct page *page)
 {
 	VM_WARN_ON_ONCE_PAGE(1, page);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH v5 2/3] mm/memory-failure: improve large block size folio handling.
  2025-10-31 16:19 [PATCH v5 0/3] Optimize folio split in memory failure Zi Yan
  2025-10-31 16:19 ` [PATCH v5 1/3] mm/huge_memory: add split_huge_page_to_order() Zi Yan
@ 2025-10-31 16:20 ` Zi Yan
  2025-10-31 16:20 ` [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related Zi Yan
  2 siblings, 0 replies; 9+ messages in thread
From: Zi Yan @ 2025-10-31 16:20 UTC (permalink / raw)
  To: linmiaohe, david, jane.chu
  Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Matthew Wilcox (Oracle), Wei Yang,
	Yang Shi, linux-fsdevel, linux-kernel, linux-mm

Large block size (LBS) folios cannot be split to order-0; they can only
be split down to min_order_for_split(). Currently the split simply fails,
which is not optimal. Split the folio to min_order_for_split() instead,
so that after the split only the folio containing the poisoned page
becomes unusable.

For soft offline, do not split the large folio if its
min_order_for_split() is not 0, since the folio is still accessible from
userspace and a premature split might cause a performance loss.

Suggested-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Barry Song <baohua@kernel.org>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/memory-failure.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index f698df156bf8..acc35c881547 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
  * there is still more to do, hence the page refcount we took earlier
  * is still needed.
  */
-static int try_to_split_thp_page(struct page *page, bool release)
+static int try_to_split_thp_page(struct page *page, unsigned int new_order,
+		bool release)
 {
 	int ret;
 
 	lock_page(page);
-	ret = split_huge_page(page);
+	ret = split_huge_page_to_order(page, new_order);
 	unlock_page(page);
 
 	if (ret && release)
@@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
 	folio_unlock(folio);
 
 	if (folio_test_large(folio)) {
+		const int new_order = min_order_for_split(folio);
+		int err;
+
 		/*
 		 * The flag must be set after the refcount is bumped
 		 * otherwise it may race with THP split.
@@ -2294,7 +2298,16 @@ int memory_failure(unsigned long pfn, int flags)
 		 * page is a valid handlable page.
 		 */
 		folio_set_has_hwpoisoned(folio);
-		if (try_to_split_thp_page(p, false) < 0) {
+		err = try_to_split_thp_page(p, new_order, /* release= */ false);
+		/*
+		 * If splitting a folio to order-0 fails, kill the process.
+		 * Split the folio regardless to minimize unusable pages.
+		 * Because the memory failure code cannot handle large
+		 * folios, this split is always treated as if it failed.
+		 */
+		if (err || new_order) {
+			/* get folio again in case the original one is split */
+			folio = page_folio(p);
 			res = -EHWPOISON;
 			kill_procs_now(p, pfn, flags, folio);
 			put_page(p);
@@ -2621,7 +2634,17 @@ static int soft_offline_in_use_page(struct page *page)
 	};
 
 	if (!huge && folio_test_large(folio)) {
-		if (try_to_split_thp_page(page, true)) {
+		const int new_order = min_order_for_split(folio);
+
+		/*
+		 * If new_order (target split order) is not 0, do not split the
+		 * folio at all to retain the still accessible large folio.
+		 * NOTE: if minimizing the number of soft offline pages is
+		 * preferred, split it to non-zero new_order like it is done in
+		 * memory_failure().
+		 */
+		if (new_order || try_to_split_thp_page(page, /* new_order= */ 0,
+						       /* release= */ true)) {
 			pr_info("%#lx: thp split failed\n", pfn);
 			return -EBUSY;
 		}
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
  2025-10-31 16:19 [PATCH v5 0/3] Optimize folio split in memory failure Zi Yan
  2025-10-31 16:19 ` [PATCH v5 1/3] mm/huge_memory: add split_huge_page_to_order() Zi Yan
  2025-10-31 16:20 ` [PATCH v5 2/3] mm/memory-failure: improve large block size folio handling Zi Yan
@ 2025-10-31 16:20 ` Zi Yan
  2025-10-31 23:36   ` Wei Yang
  2025-11-05 16:10   ` Zi Yan
  2 siblings, 2 replies; 9+ messages in thread
From: Zi Yan @ 2025-10-31 16:20 UTC (permalink / raw)
  To: linmiaohe, david, jane.chu
  Cc: kernel, ziy, akpm, mcgrof, nao.horiguchi, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Matthew Wilcox (Oracle), Wei Yang,
	Yang Shi, linux-fsdevel, linux-kernel, linux-mm

try_folio_split_to_order(), folio_split(), __folio_split(), and
__split_unmapped_folio() do not have the correct kernel-doc comment
format. Fix them.

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Barry Song <baohua@kernel.org>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 include/linux/huge_mm.h | 10 +++++----
 mm/huge_memory.c        | 45 ++++++++++++++++++++++-------------------
 2 files changed, 30 insertions(+), 25 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 34f8d8453bf3..cbb2243f8e56 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -386,9 +386,9 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
 	return split_huge_page_to_list_to_order(page, NULL, new_order);
 }
 
-/*
- * try_folio_split_to_order - try to split a @folio at @page to @new_order using
- * non uniform split.
+/**
+ * try_folio_split_to_order() - try to split a @folio at @page to @new_order
+ * using non uniform split.
  * @folio: folio to be split
  * @page: split to @new_order at the given page
  * @new_order: the target split order
@@ -398,7 +398,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
  * folios are put back to LRU list. Use min_order_for_split() to get the lower
  * bound of @new_order.
  *
- * Return: 0: split is successful, otherwise split failed.
+ * Return: 0 - split is successful, otherwise split failed.
  */
 static inline int try_folio_split_to_order(struct folio *folio,
 		struct page *page, unsigned int new_order)
@@ -486,6 +486,8 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 /**
  * folio_test_pmd_mappable - Can we map this folio with a PMD?
  * @folio: The folio to test
+ *
+ * Return: true - @folio can be mapped, false - @folio cannot be mapped.
  */
 static inline bool folio_test_pmd_mappable(struct folio *folio)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0e24bb7e90d0..ad2fc52651a6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3567,8 +3567,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 		ClearPageCompound(&folio->page);
 }
 
-/*
- * It splits an unmapped @folio to lower order smaller folios in two ways.
+/**
+ * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
+ * two ways: uniform split or non-uniform split.
  * @folio: the to-be-split folio
  * @new_order: the smallest order of the after split folios (since buddy
  *             allocator like split generates folios with orders from @folio's
@@ -3589,22 +3590,22 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
  *    uniform_split is false.
  *
  * The high level flow for these two methods are:
- * 1. uniform split: a single __split_folio_to_order() is called to split the
- *    @folio into @new_order, then we traverse all the resulting folios one by
- *    one in PFN ascending order and perform stats, unfreeze, adding to list,
- *    and file mapping index operations.
- * 2. non-uniform split: in general, folio_order - @new_order calls to
- *    __split_folio_to_order() are made in a for loop to split the @folio
- *    to one lower order at a time. The resulting small folios are processed
- *    like what is done during the traversal in 1, except the one containing
- *    @page, which is split in next for loop.
+ * 1. uniform split: @xas is split with no expectation of failure and a single
+ *    __split_folio_to_order() is called to split the @folio into @new_order
+ *    along with stats update.
+ * 2. non-uniform split: folio_order - @new_order calls to
+ *    __split_folio_to_order() are expected to be made in a for loop to split
+ *    the @folio to one lower order at a time. The folio containing @page is
+ *    split in each iteration. @xas is split into half in each iteration and
+ *    can fail. A failed @xas split leaves split folios as is without merging
+ *    them back.
  *
  * After splitting, the caller's folio reference will be transferred to the
  * folio containing @page. The caller needs to unlock and/or free after-split
  * folios if necessary.
  *
- * For !uniform_split, when -ENOMEM is returned, the original folio might be
- * split. The caller needs to check the input folio.
+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
+ * split but not to @new_order, the caller needs to check)
  */
 static int __split_unmapped_folio(struct folio *folio, int new_order,
 		struct page *split_at, struct xa_state *xas,
@@ -3722,8 +3723,8 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
 	return true;
 }
 
-/*
- * __folio_split: split a folio at @split_at to a @new_order folio
+/**
+ * __folio_split() - split a folio at @split_at to a @new_order folio
  * @folio: folio to split
  * @new_order: the order of the new folio
  * @split_at: a page within the new folio
@@ -3741,7 +3742,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
  * 1. for uniform split, @lock_at points to one of @folio's subpages;
  * 2. for buddy allocator like (non-uniform) split, @lock_at points to @folio.
  *
- * return: 0: successful, <0 failed (if -ENOMEM is returned, @folio might be
+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
  * split but not to @new_order, the caller needs to check)
  */
 static int __folio_split(struct folio *folio, unsigned int new_order,
@@ -4130,14 +4131,13 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
 				unmapped);
 }
 
-/*
- * folio_split: split a folio at @split_at to a @new_order folio
+/**
+ * folio_split() - split a folio at @split_at to a @new_order folio
  * @folio: folio to split
  * @new_order: the order of the new folio
  * @split_at: a page within the new folio
- *
- * return: 0: successful, <0 failed (if -ENOMEM is returned, @folio might be
- * split but not to @new_order, the caller needs to check)
+ * @list: after-split folios are added to @list if not null, otherwise to LRU
+ *        list
  *
  * It has the same prerequisites and returns as
  * split_huge_page_to_list_to_order().
@@ -4151,6 +4151,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
  * [order-4, {order-3}, order-3, order-5, order-6, order-7, order-8].
  *
  * After split, folio is left locked for caller.
+ *
+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
+ * split but not to @new_order, the caller needs to check)
  */
 int folio_split(struct folio *folio, unsigned int new_order,
 		struct page *split_at, struct list_head *list)
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
  2025-10-31 16:20 ` [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related Zi Yan
@ 2025-10-31 23:36   ` Wei Yang
  2025-10-31 23:52     ` Zi Yan
  2025-11-05 16:10   ` Zi Yan
  1 sibling, 1 reply; 9+ messages in thread
From: Wei Yang @ 2025-10-31 23:36 UTC (permalink / raw)
  To: Zi Yan
  Cc: linmiaohe, david, jane.chu, kernel, akpm, mcgrof, nao.horiguchi,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
	Matthew Wilcox (Oracle), Wei Yang, Yang Shi, linux-fsdevel,
	linux-kernel, linux-mm

On Fri, Oct 31, 2025 at 12:20:01PM -0400, Zi Yan wrote:
[...]
>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>index 0e24bb7e90d0..ad2fc52651a6 100644
>--- a/mm/huge_memory.c
>+++ b/mm/huge_memory.c
>@@ -3567,8 +3567,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> 		ClearPageCompound(&folio->page);
> }
> 
>-/*
>- * It splits an unmapped @folio to lower order smaller folios in two ways.
>+/**
>+ * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
>+ * two ways: uniform split or non-uniform split.
>  * @folio: the to-be-split folio
>  * @new_order: the smallest order of the after split folios (since buddy
>  *             allocator like split generates folios with orders from @folio's
>@@ -3589,22 +3590,22 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>  *    uniform_split is false.
>  *
>  * The high level flow for these two methods are:
>- * 1. uniform split: a single __split_folio_to_order() is called to split the
>- *    @folio into @new_order, then we traverse all the resulting folios one by
>- *    one in PFN ascending order and perform stats, unfreeze, adding to list,
>- *    and file mapping index operations.
>- * 2. non-uniform split: in general, folio_order - @new_order calls to
>- *    __split_folio_to_order() are made in a for loop to split the @folio
>- *    to one lower order at a time. The resulting small folios are processed
>- *    like what is done during the traversal in 1, except the one containing
>- *    @page, which is split in next for loop.
>+ * 1. uniform split: @xas is split with no expectation of failure and a single
>+ *    __split_folio_to_order() is called to split the @folio into @new_order
>+ *    along with stats update.
>+ * 2. non-uniform split: folio_order - @new_order calls to
>+ *    __split_folio_to_order() are expected to be made in a for loop to split
>+ *    the @folio to one lower order at a time. The folio containing @page is

Hope it is not annoying.

The parameter's name is @split_at, maybe we misuse it?

s/containing @page/containing @split_at/

>+ *    split in each iteration. @xas is split into half in each iteration and
>+ *    can fail. A failed @xas split leaves split folios as is without merging
>+ *    them back.
>  *
>  * After splitting, the caller's folio reference will be transferred to the
>  * folio containing @page. The caller needs to unlock and/or free after-split

The same above.

And there is probably another one above this comment (not shown here).

>  * folios if necessary.
>  *
>- * For !uniform_split, when -ENOMEM is returned, the original folio might be
>- * split. The caller needs to check the input folio.
>+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
>+ * split but not to @new_order, the caller needs to check)
>  */
> static int __split_unmapped_folio(struct folio *folio, int new_order,
> 		struct page *split_at, struct xa_state *xas,

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
  2025-10-31 23:36   ` Wei Yang
@ 2025-10-31 23:52     ` Zi Yan
  2025-11-01  0:08       ` Wei Yang
  2025-11-03 16:38       ` David Hildenbrand (Red Hat)
  0 siblings, 2 replies; 9+ messages in thread
From: Zi Yan @ 2025-10-31 23:52 UTC (permalink / raw)
  To: akpm, Wei Yang
  Cc: linmiaohe, david, jane.chu, kernel, mcgrof, nao.horiguchi,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
	Matthew Wilcox (Oracle), Yang Shi, linux-fsdevel, linux-kernel,
	linux-mm

On 31 Oct 2025, at 19:36, Wei Yang wrote:

> On Fri, Oct 31, 2025 at 12:20:01PM -0400, Zi Yan wrote:
> [...]
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 0e24bb7e90d0..ad2fc52651a6 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3567,8 +3567,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>> 		ClearPageCompound(&folio->page);
>> }
>>
>> -/*
>> - * It splits an unmapped @folio to lower order smaller folios in two ways.
>> +/**
>> + * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
>> + * two ways: uniform split or non-uniform split.
>>  * @folio: the to-be-split folio
>>  * @new_order: the smallest order of the after split folios (since buddy
>>  *             allocator like split generates folios with orders from @folio's
>> @@ -3589,22 +3590,22 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>>  *    uniform_split is false.
>>  *
>>  * The high level flow for these two methods are:
>> - * 1. uniform split: a single __split_folio_to_order() is called to split the
>> - *    @folio into @new_order, then we traverse all the resulting folios one by
>> - *    one in PFN ascending order and perform stats, unfreeze, adding to list,
>> - *    and file mapping index operations.
>> - * 2. non-uniform split: in general, folio_order - @new_order calls to
>> - *    __split_folio_to_order() are made in a for loop to split the @folio
>> - *    to one lower order at a time. The resulting small folios are processed
>> - *    like what is done during the traversal in 1, except the one containing
>> - *    @page, which is split in next for loop.
>> + * 1. uniform split: @xas is split with no expectation of failure and a single
>> + *    __split_folio_to_order() is called to split the @folio into @new_order
>> + *    along with stats update.
>> + * 2. non-uniform split: folio_order - @new_order calls to
>> + *    __split_folio_to_order() are expected to be made in a for loop to split
>> + *    the @folio to one lower order at a time. The folio containing @page is
>
> Hope it is not annoying.
>
> The parameter's name is @split_at, maybe we misuse it?
>
> s/containing @page/containing @split_at/
>
>> + *    split in each iteration. @xas is split into half in each iteration and
>> + *    can fail. A failed @xas split leaves split folios as is without merging
>> + *    them back.
>>  *
>>  * After splitting, the caller's folio reference will be transferred to the
>>  * folio containing @page. The caller needs to unlock and/or free after-split
>
> The same above.
>
> And probably there is another one in above this comment(not shown here).

Hi Andrew,

Do you mind applying this fixup to address Wei's concerns?

Thanks.

From e1894a4e7ac95bdfe333cf5bee567f0ff90ddf5d Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy@nvidia.com>
Date: Fri, 31 Oct 2025 19:50:55 -0400
Subject: [PATCH] mm/huge_memory: kernel-doc fixup

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ad2fc52651a6..a30fee2001b5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3586,7 +3586,7 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
  *    uniform_split is true.
  * 2. buddy allocator like (non-uniform) split: the given @folio is split into
  *    half and one of the half (containing the given page) is split into half
- *    until the given @page's order becomes @new_order. This is done when
+ *    until the given @folio's order becomes @new_order. This is done when
  *    uniform_split is false.
  *
  * The high level flow for these two methods are:
@@ -3595,14 +3595,14 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
  *    along with stats update.
  * 2. non-uniform split: folio_order - @new_order calls to
  *    __split_folio_to_order() are expected to be made in a for loop to split
- *    the @folio to one lower order at a time. The folio containing @page is
- *    split in each iteration. @xas is split into half in each iteration and
+ *    the @folio to one lower order at a time. The folio containing @split_at
+ *    is split in each iteration. @xas is split into half in each iteration and
  *    can fail. A failed @xas split leaves split folios as is without merging
  *    them back.
  *
  * After splitting, the caller's folio reference will be transferred to the
- * folio containing @page. The caller needs to unlock and/or free after-split
- * folios if necessary.
+ * folio containing @split_at. The caller needs to unlock and/or free
+ * after-split folios if necessary.
  *
  * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
  * split but not to @new_order, the caller needs to check)
-- 
2.51.0





--
Best Regards,
Yan, Zi

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
  2025-10-31 23:52     ` Zi Yan
@ 2025-11-01  0:08       ` Wei Yang
  2025-11-03 16:38       ` David Hildenbrand (Red Hat)
  1 sibling, 0 replies; 9+ messages in thread
From: Wei Yang @ 2025-11-01  0:08 UTC (permalink / raw)
  To: Zi Yan
  Cc: akpm, Wei Yang, linmiaohe, david, jane.chu, kernel, mcgrof,
	nao.horiguchi, Lorenzo Stoakes, Baolin Wang, Liam R. Howlett,
	Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
	Matthew Wilcox (Oracle), Yang Shi, linux-fsdevel, linux-kernel,
	linux-mm

On Fri, Oct 31, 2025 at 07:52:28PM -0400, Zi Yan wrote:
>On 31 Oct 2025, at 19:36, Wei Yang wrote:
>
>> On Fri, Oct 31, 2025 at 12:20:01PM -0400, Zi Yan wrote:
>> [...]
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 0e24bb7e90d0..ad2fc52651a6 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -3567,8 +3567,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>>> 		ClearPageCompound(&folio->page);
>>> }
>>>
>>> -/*
>>> - * It splits an unmapped @folio to lower order smaller folios in two ways.
>>> +/**
>>> + * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
>>> + * two ways: uniform split or non-uniform split.
>>>  * @folio: the to-be-split folio
>>>  * @new_order: the smallest order of the after split folios (since buddy
>>>  *             allocator like split generates folios with orders from @folio's
>>> @@ -3589,22 +3590,22 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>>>  *    uniform_split is false.
>>>  *
>>>  * The high level flow for these two methods are:
>>> - * 1. uniform split: a single __split_folio_to_order() is called to split the
>>> - *    @folio into @new_order, then we traverse all the resulting folios one by
>>> - *    one in PFN ascending order and perform stats, unfreeze, adding to list,
>>> - *    and file mapping index operations.
>>> - * 2. non-uniform split: in general, folio_order - @new_order calls to
>>> - *    __split_folio_to_order() are made in a for loop to split the @folio
>>> - *    to one lower order at a time. The resulting small folios are processed
>>> - *    like what is done during the traversal in 1, except the one containing
>>> - *    @page, which is split in next for loop.
>>> + * 1. uniform split: @xas is split with no expectation of failure and a single
>>> + *    __split_folio_to_order() is called to split the @folio into @new_order
>>> + *    along with stats update.
>>> + * 2. non-uniform split: folio_order - @new_order calls to
>>> + *    __split_folio_to_order() are expected to be made in a for loop to split
>>> + *    the @folio to one lower order at a time. The folio containing @page is
>>
>> Hope it is not annoying.
>>
>> The parameter's name is @split_at, maybe we misuse it?
>>
>> s/containing @page/containing @split_at/
>>
>>> + *    split in each iteration. @xas is split into half in each iteration and
>>> + *    can fail. A failed @xas split leaves split folios as is without merging
>>> + *    them back.
>>>  *
>>>  * After splitting, the caller's folio reference will be transferred to the
>>>  * folio containing @page. The caller needs to unlock and/or free after-split
>>
>> The same above.
>>
>> And probably there is another one in above this comment(not shown here).
>
>Hi Andrew,
>
>Do you mind applying this fixup to address Wei's concerns?
>
>Thanks.
>
>From e1894a4e7ac95bdfe333cf5bee567f0ff90ddf5d Mon Sep 17 00:00:00 2001
>From: Zi Yan <ziy@nvidia.com>
>Date: Fri, 31 Oct 2025 19:50:55 -0400
>Subject: [PATCH] mm/huge_memory: kernel-doc fixup
>
>Signed-off-by: Zi Yan <ziy@nvidia.com>
>---
> mm/huge_memory.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>index ad2fc52651a6..a30fee2001b5 100644
>--- a/mm/huge_memory.c
>+++ b/mm/huge_memory.c
>@@ -3586,7 +3586,7 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>  *    uniform_split is true.
>  * 2. buddy allocator like (non-uniform) split: the given @folio is split into
>  *    half and one of the half (containing the given page) is split into half
>- *    until the given @page's order becomes @new_order. This is done when
>+ *    until the given @folio's order becomes @new_order. This is done when
>  *    uniform_split is false.
>  *
>  * The high level flow for these two methods are:
>@@ -3595,14 +3595,14 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>  *    along with stats update.
>  * 2. non-uniform split: folio_order - @new_order calls to
>  *    __split_folio_to_order() are expected to be made in a for loop to split
>- *    the @folio to one lower order at a time. The folio containing @page is
>- *    split in each iteration. @xas is split into half in each iteration and
>+ *    the @folio to one lower order at a time. The folio containing @split_at
>+ *    is split in each iteration. @xas is split into half in each iteration and
>  *    can fail. A failed @xas split leaves split folios as is without merging
>  *    them back.
>  *
>  * After splitting, the caller's folio reference will be transferred to the
>- * folio containing @page. The caller needs to unlock and/or free after-split
>- * folios if necessary.
>+ * folio containing @split_at. The caller needs to unlock and/or free
>+ * after-split folios if necessary.
>  *
>  * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
>  * split but not to @new_order, the caller needs to check)
>-- 
>2.51.0
>
>

Thanks.

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

>
>
>--
>Best Regards,
>Yan, Zi

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
  2025-10-31 23:52     ` Zi Yan
  2025-11-01  0:08       ` Wei Yang
@ 2025-11-03 16:38       ` David Hildenbrand (Red Hat)
  1 sibling, 0 replies; 9+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-11-03 16:38 UTC (permalink / raw)
  To: Zi Yan, akpm, Wei Yang
  Cc: linmiaohe, jane.chu, kernel, mcgrof, nao.horiguchi,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
	Matthew Wilcox (Oracle), Yang Shi, linux-fsdevel, linux-kernel,
	linux-mm


> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ad2fc52651a6..a30fee2001b5 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3586,7 +3586,7 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>    *    uniform_split is true.
>    * 2. buddy allocator like (non-uniform) split: the given @folio is split into
>    *    half and one of the half (containing the given page) is split into half
> - *    until the given @page's order becomes @new_order. This is done when
> + *    until the given @folio's order becomes @new_order. This is done when
>    *    uniform_split is false.
>    *
>    * The high level flow for these two methods are:
> @@ -3595,14 +3595,14 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>    *    along with stats update.
>    * 2. non-uniform split: folio_order - @new_order calls to
>    *    __split_folio_to_order() are expected to be made in a for loop to split
> - *    the @folio to one lower order at a time. The folio containing @page is
> - *    split in each iteration. @xas is split into half in each iteration and
> + *    the @folio to one lower order at a time. The folio containing @split_at
> + *    is split in each iteration. @xas is split into half in each iteration and
>    *    can fail. A failed @xas split leaves split folios as is without merging
>    *    them back.
>    *
>    * After splitting, the caller's folio reference will be transferred to the
> - * folio containing @page. The caller needs to unlock and/or free after-split
> - * folios if necessary.
> + * folio containing @split_at. The caller needs to unlock and/or free
> + * after-split folios if necessary.
>    *
>    * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
>    * split but not to @new_order, the caller needs to check)

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>

-- 
Cheers

David


* Re: [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
  2025-10-31 16:20 ` [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related Zi Yan
  2025-10-31 23:36   ` Wei Yang
@ 2025-11-05 16:10   ` Zi Yan
  1 sibling, 0 replies; 9+ messages in thread
From: Zi Yan @ 2025-11-05 16:10 UTC (permalink / raw)
  To: akpm, linmiaohe, david, jane.chu
  Cc: kernel, ziy, mcgrof, nao.horiguchi, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lance Yang, Matthew Wilcox (Oracle), Wei Yang, Yang Shi,
	linux-fsdevel, linux-kernel, linux-mm

On 31 Oct 2025, at 12:20, Zi Yan wrote:

> try_folio_split_to_order(), folio_split, __folio_split(), and
> __split_unmapped_folio() do not have correct kernel-doc comment format.
> Fix them.
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Reviewed-by: Lance Yang <lance.yang@linux.dev>
> Reviewed-by: Barry Song <baohua@kernel.org>
> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>  include/linux/huge_mm.h | 10 +++++----
>  mm/huge_memory.c        | 45 ++++++++++++++++++++++-------------------
>  2 files changed, 30 insertions(+), 25 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 34f8d8453bf3..cbb2243f8e56 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -386,9 +386,9 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>  	return split_huge_page_to_list_to_order(page, NULL, new_order);
>  }
>
> -/*
> - * try_folio_split_to_order - try to split a @folio at @page to @new_order using
> - * non uniform split.
> +/**
> + * try_folio_split_to_order() - try to split a @folio at @page to @new_order
> + * using non uniform split.
>   * @folio: folio to be split
>   * @page: split to @new_order at the given page
>   * @new_order: the target split order
> @@ -398,7 +398,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>   * folios are put back to LRU list. Use min_order_for_split() to get the lower
>   * bound of @new_order.
>   *
> - * Return: 0: split is successful, otherwise split failed.
> + * Return: 0 - split is successful, otherwise split failed.
>   */
>  static inline int try_folio_split_to_order(struct folio *folio,
>  		struct page *page, unsigned int new_order)
> @@ -486,6 +486,8 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
>  /**
>   * folio_test_pmd_mappable - Can we map this folio with a PMD?
>   * @folio: The folio to test
> + *
> + * Return: true - @folio can be mapped, false - @folio cannot be mapped.
>   */
>  static inline bool folio_test_pmd_mappable(struct folio *folio)
>  {
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 0e24bb7e90d0..ad2fc52651a6 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3567,8 +3567,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>  		ClearPageCompound(&folio->page);
>  }
>
> -/*
> - * It splits an unmapped @folio to lower order smaller folios in two ways.
> +/**
> + * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
> + * two ways: uniform split or non-uniform split.
>   * @folio: the to-be-split folio
>   * @new_order: the smallest order of the after split folios (since buddy
>   *             allocator like split generates folios with orders from @folio's
> @@ -3589,22 +3590,22 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>   *    uniform_split is false.
>   *
>   * The high level flow for these two methods are:
> - * 1. uniform split: a single __split_folio_to_order() is called to split the
> - *    @folio into @new_order, then we traverse all the resulting folios one by
> - *    one in PFN ascending order and perform stats, unfreeze, adding to list,
> - *    and file mapping index operations.
> - * 2. non-uniform split: in general, folio_order - @new_order calls to
> - *    __split_folio_to_order() are made in a for loop to split the @folio
> - *    to one lower order at a time. The resulting small folios are processed
> - *    like what is done during the traversal in 1, except the one containing
> - *    @page, which is split in next for loop.
> + * 1. uniform split: @xas is split with no expectation of failure and a single
> + *    __split_folio_to_order() is called to split the @folio into @new_order
> + *    along with stats update.
> + * 2. non-uniform split: folio_order - @new_order calls to
> + *    __split_folio_to_order() are expected to be made in a for loop to split
> + *    the @folio to one lower order at a time. The folio containing @page is
> + *    split in each iteration. @xas is split into half in each iteration and
> + *    can fail. A failed @xas split leaves split folios as is without merging
> + *    them back.
>   *

This change caused an error and a warning from docutils[1].
The following patch fixed the issue.

Hi Andrew,

Do you mind folding this in? This fixup can just go after [2], and both
can be folded into this patch.

Thanks.


From c49e940cc23e051e3f4faf0bca002a05bb6b0dc1 Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy@nvidia.com>
Date: Wed, 5 Nov 2025 11:01:09 -0500
Subject: [PATCH] mm/huge_memory: fix an error and a warning from docutils

Add a newline to fix the following error and warning:

Documentation/core-api/mm-api:134: mm/huge_memory.c:3593: ERROR: Unexpected indentation. [docutils]
Documentation/core-api/mm-api:134: mm/huge_memory.c:3595: WARNING: Block quote ends without a blank line; unexpected unindent. [docutils]

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a30fee2001b5..36fc4ff002c9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3590,6 +3590,7 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
  *    uniform_split is false.
  *
  * The high level flow for these two methods are:
+ *
  * 1. uniform split: @xas is split with no expectation of failure and a single
  *    __split_folio_to_order() is called to split the @folio into @new_order
  *    along with stats update.
-- 
2.51.0





[1] https://lore.kernel.org/all/20251105162314.004e2764@canb.auug.org.au/
[2] https://lore.kernel.org/all/BE7AC5F3-9E64-4923-861D-C2C4E0CB91EB@nvidia.com/

Best Regards,
Yan, Zi
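
For context on why a single blank line fixes the docutils complaints:
kernel-doc comments are rendered as reStructuredText, which requires a blank
line between a paragraph and a following enumerated list. A minimal sketch of
the comment shape (hypothetical function name, abbreviated text):

```c
/**
 * example_split() - illustrative kernel-doc fragment.
 *
 * The high level flow for these two methods are:
 *
 * 1. uniform split: a single call splits the folio directly.
 * 2. non-uniform split: the folio is halved one order at a time.
 */
```

Without the blank line after the "high level flow" sentence, docutils treats
the list as an unexpectedly indented block, producing the error and warning
quoted above.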


end of thread, other threads:[~2025-11-05 16:10 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-31 16:19 [PATCH v5 0/3] Optimize folio split in memory failure Zi Yan
2025-10-31 16:19 ` [PATCH v5 1/3] mm/huge_memory: add split_huge_page_to_order() Zi Yan
2025-10-31 16:20 ` [PATCH v5 2/3] mm/memory-failure: improve large block size folio handling Zi Yan
2025-10-31 16:20 ` [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related Zi Yan
2025-10-31 23:36   ` Wei Yang
2025-10-31 23:52     ` Zi Yan
2025-11-01  0:08       ` Wei Yang
2025-11-03 16:38       ` David Hildenbrand (Red Hat)
2025-11-05 16:10   ` Zi Yan
